I’m trying not to use the conda env. For that reason I removed the condaConfig and added the path to my virtual environment. However, this does not work. Does anyone know why?
I get the error:
Unable to resolve interpreter for:
Do not manually manage environments, and do not use venv. Our rcc will take care of environment management if you let it. All you have to do is let our tooling create robots for you, and then add your dependencies to the conda.yaml there.
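For illustration, a minimal conda.yaml might look something like this (the package names and versions here are only examples, not required ones):

```yaml
channels:
  - conda-forge

dependencies:
  - python=3.9.13          # interpreter resolved by rcc, not a local venv
  - pip=22.1.2
  - pip:
      - rpaframework==15.6.0   # example pip dependency
```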
Is there a good reason for not using conda? I’m just curious, because this is valuable information for us also.
And yes, of course you can use your own setup, but then we cannot help you much, since most of our tooling is based on declarative, repeatable, isolated, and clean environments created using micromamba (conda) and pip, and on keeping those environments that way automatically, not manually, using our open source tool, rcc.
And doing that with venv is not supported on our cloud platform. But you can set up your local machines, virtual machines, or containers yourself, and then run Workforce Agent there, or try to use our developer tools. But be warned, it might be an uphill battle.
Or maybe you could try out our tooling and give us valuable feedback on what we are missing compared to your self-managed venv environments.
Thanks for the feedback.
In my team we don’t use conda. I tried to use conda as my environment, but it was not working: it did not download all the necessary files. For that reason, I was trying to do it with venv.
I’ll chime in with a use case for allowing VS Code to use an existing environment that might be informative/useful here. In my particular case, I’m using mamba already, but I need to create the environment with my own tooling.
I’m trying to build a robot library that uses some other AI stuff we wrote. In order to do the development conveniently, I need the robot I’m building to have access to that AI code (which is an existing Python package) in development/editable mode. What we do outside your VS Code plugin is something akin to `python setup.py develop` (or `pip install -e .`), so we can edit and see the code changes quickly. I want to be able to do the same thing in my library. E.g.
```python
# in package my_ai
from robot.api.deco import keyword, library
from the_ai_in_dev_mode import CoolAI


@library
class AI:
    @keyword('Impress me')
    def impress(self):
        return CoolAI().impress_me()
```
And then:
```robotframework
*** Settings ***
Library    my_ai.AI

*** Tasks ***
Do AI
    my_ai.AI.Impress me
```
Now, I can’t do that, because I can’t use the conda.yaml setup to install my other package in dev mode (I can’t use the `-r requirements.txt` hack because your environment-building tooling doesn’t copy the file across). Very specifically, I made an environment and everything works fine in it, insofar as I can run `robot <path to that ^ .robot file>` and it’ll work. However, when I try to run that in VS Code, it fails because it overrides my environment with the .holotree one. This is what I see in the terminal that pops up when I run a task:
I could of course hack around that by calling `/Users/matt/.robocorp/holotree/b1f3c244e_fe22aa9b/bin/python setup.py develop` directly and working around it that way; it’s just awkward. Essentially, everything would be great if I could just tell it to use my own environment.
So anyway, chiming in here with a use case for a non-robot.yaml-managed environment.
First of all, if the tooling is working against you, then don’t use it. For library development work you might want to use something like poetry, for example.
And I probably do not have a solution for you, but this is what I would do:
- Create a new robot; let’s call it “coolaidev” (using `rcc create coolaidev` in some working base directory).
- Inside that project, create a subdirectory called `the-api-library` (either directly or as a git submodule).
- Then, in robot.yaml, add a PYTHONPATH entry for that `the-api-library` (and an optional PATH entry, if there are executables there …).
- Then put all development dependencies in conda.yaml, including everything from your requirements.txt pip dependencies (but move as much as possible to conda-forge dependencies, for improved environment creation performance).
- Bonus: you can also add your development tasks as robot tasks in robot.yaml, and that way have a repeatable way to operate on your code.
- You can also now enable that environment in your terminal. On Mac and Linux, use `source <(rcc holotree variables --space coolaidev --robot path/to/coolaidev/robot.yaml)`; on Windows, run `rcc holotree variables --space coolaidev --robot path/to/coolaidev/robot.yaml > coolaidevenv.bat` and then do `call coolaidevenv.bat`.
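To make the robot.yaml part of those steps concrete, here is a hedged sketch of what it might look like (the task name, output directory, and `the-api-library` path are illustrative, loosely following the default robot template):

```yaml
tasks:
  # "bonus" development tasks can live here alongside the robot task
  Run all tasks:
    shell: python -m robot --report NONE --outputdir output tasks.robot

condaConfigFile: conda.yaml
artifactsDir: output

PATH:
  - .                  # optionally add the-api-library here too, if it has executables
PYTHONPATH:
  - .
  - the-api-library    # makes the in-development library importable

ignoreFiles:
  - .gitignore
```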
What is not solved (yet, at least):
- Doing that `python setup.py develop`, since it can do “anything” (which can also break/pollute the isolated environment).
- Depending on what it does, you could use that “environment enabling” and run `python setup.py develop` there, but if it sets local things, they might not be visible elsewhere …
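As a hedged sketch of that second option on Mac/Linux (the space and robot names follow the “coolaidev” example above; whether the editable install stays contained depends entirely on what the package’s setup actually does):

```shell
# enable the holotree environment in the current shell ...
source <(rcc holotree variables --space coolaidev --robot path/to/coolaidev/robot.yaml)

# ... then run the editable install with that environment's python
python setup.py develop    # or: pip install -e .
```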
This might also work with VS Code, but I have not tested it, since I do not use or own VS Code. I’m a poor vim user.
So, for me, your use case lines up with how I use rcc in my daily life, but this is just my opinion (and way of working). And my view is that development environments should also be isolated and repeatable, not just end-user environments.