CUDA drivers (the part that conda cannot install) are backward compatible with applications compiled with older versions of CUDA. So, for example, the CUDA 9.2 build of PyTorch only requires that CUDA >= 9.2 be present on the system. This backward compatibility also extends to the cudatoolkit (the userspace libraries supplied by NVIDIA, which Anaconda already packages): a conda environment with cudatoolkit 8.0 works just fine on a system that has the CUDA 9.2 drivers.

So, on one hand, there is motivation (much like with glibc) to pick an arbitrary old CUDA version, build everything with it, and rely on driver backward compatibility. On the other hand, aside from new CUDA language features (which a project may choose to ignore for compatibility reasons), building with newer CUDA versions can improve performance as well as add native support for newer hardware. A package compiled for CUDA 8 will not run on Volta GPUs without a lengthy JIT recompilation of all the CUDA functions in the project; this happens automatically, but can still be a bad user experience. As an example, TensorFlow compiled with CUDA 8 can take 10+ minutes to start up on a Volta GPU.

These two conflicting desires for compatibility and performance explain why it makes sense to compile packages against a range of CUDA versions (right now, I'd say 8.0 to 10.0, or 9.0 to 10.0, would be the best choice), but that still leaves the burden on the user of knowing which CUDA version they need.

Because nearly all CUDA projects require the CUDA toolkit libraries, and Anaconda packages them, we use the cudatoolkit package as our CUDA version marker. For packages in Anaconda that require CUDA, we make them depend on a specific cudatoolkit version. This allows you to force a specific CUDA version this way:

conda install pytorch cudatoolkit=8.0

That will get you a PyTorch compiled with CUDA 8, rather than something else.

The CUDA driver provides a C API to query the maximum version of CUDA supported by the driver, so a few months ago I wrote a self-contained Python function for detecting what version of CUDA (if any) is present on the system. This was for the conda team to potentially incorporate into conda as a "marker" (I think that is the right term), so that conda could include a cuda package, with a version given by this function, in the dependency solver. That would then give everyone a standard way to refer to the system CUDA dependency. I don't know where this work is on the roadmap for conda, but if there is additional work needed on the conda side to get this to the finish line, I'm happy to help. It would go a long way toward unifying the various approaches as well as improving the user experience.

I had a talk yesterday about what he thinks he needs from conda to support this. I think we agreed that conda needs "virtual packages", which have been lumped in with "markers" but which I think are actually separate. A virtual package is something that represents some aspect of the system. Its version and build string can be dynamically determined by having conda run some code for that particular virtual package. For cuda, it means that we need to decide what this package name should be. It would then be considered in the solver as a package with a strict pinning.
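The self-contained detection function mentioned above was not preserved in this copy of the post. As a sketch of the same idea (simplified error handling; assume the real function covers more platforms and corner cases), the driver's C API can be reached from Python with ctypes via cuInit and cuDriverGetVersion:

```python
import ctypes

def detect_cuda_version():
    """Return the maximum CUDA version supported by the installed
    driver as a (major, minor) tuple, or None if no usable driver
    is found. Sketch only -- not the original function from the post."""
    # Standard driver library names on Linux, macOS, and Windows.
    for name in ("libcuda.so.1", "libcuda.so", "libcuda.dylib", "nvcuda.dll"):
        try:
            lib = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return None  # no CUDA driver library found
    # cuInit(0) must succeed before the driver can be queried reliably.
    if lib.cuInit(0) != 0:
        return None
    version = ctypes.c_int(0)
    # cuDriverGetVersion reports e.g. 9020 for CUDA 9.2.
    if lib.cuDriverGetVersion(ctypes.byref(version)) != 0:
        return None
    return (version.value // 1000, (version.value % 1000) // 10)
```

On a machine with the CUDA 9.2 driver this would return (9, 2); on a machine with no NVIDIA driver it returns None.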
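The backward-compatibility rule described above (older toolkit with a newer driver works; a newer toolkit with an older driver does not) boils down to a version comparison. A hypothetical helper, not part of conda, just to make the rule concrete:

```python
def driver_supports(toolkit, driver):
    """True if a driver whose maximum supported CUDA version is `driver`
    can run an application built against CUDA toolkit version `toolkit`.
    Versions are (major, minor) tuples. Hypothetical helper for
    illustration: drivers are backward compatible, so the driver only
    needs to support at least the toolkit's version."""
    return toolkit <= driver

# A cudatoolkit 8.0 environment works on a system with CUDA 9.2 drivers:
print(driver_supports((8, 0), (9, 2)))   # True
# A CUDA 10.0 build will not run with only CUDA 9.2 drivers:
print(driver_supports((10, 0), (9, 2)))  # False
```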