Comment
Author: Admin | 2025-04-28
jsmack — April 23, 2021, 6:23 p.m.

Vellum uses OpenCL. If you want to avoid OpenCL, don't use Vellum or other OpenCL-based nodes. The error you report sounds like the native command queuing feature added in 18.5, which was incompatible with the Nvidia Turing architecture until they fixed it in a driver update. Try installing an Nvidia driver from after November 2020 to see if the error goes away. I've also seen that error when the simulation doesn't fit into memory, which can happen if there are too many points or constraints. The RTX 2080 only has 8 GB of VRAM, which can be quite limiting; 16 GB of VRAM is recommended for bigger OpenCL sims.

Reply:

I was only trying to squash an inflated torus, a simple four-node simulation. In 3ds Max with tyFlow and the CUDA engine I was capable of much more without hiccups, on this same setup. Actually, back when I had 64 GB of RAM, 3ds Max handled more than Houdini can now at 128 GB. I think Houdini is really bad at optimizing resources. I can run 100-million-particle fluid simulations overnight with Phoenix FD in 3ds Max, but Houdini would never handle more than 10 million on the same machine unless one has a render farm. I really hope SideFX tries to optimize Houdini for a single user rather than a farm of computers. That being said, the RTX 2080 is still considered a good GPU by many standards, so I'm not sure what I'm supposed to do if I want to use Vellum even for smaller sims. There is no computer good enough in this world that would fit Houdini's computational needs, because it's made for render farms, not a
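jsmack's point about the sim not fitting into 8 GB of VRAM can be sketched as a rough back-of-envelope check. The per-point byte figure below is a loose illustrative assumption, not a measured Vellum cost; real usage depends on constraints, substeps, and solver settings.

```python
# Rough check: does a point-based OpenCL sim fit in VRAM?
# BYTES_PER_POINT is an assumed illustrative cost (positions, velocities,
# constraint data), NOT a number published by SideFX.

BYTES_PER_POINT = 1024   # assumption for illustration only
GIB = 1024 ** 3

def fits_in_vram(num_points: int, vram_gib: float, headroom: float = 0.8) -> bool:
    """True if num_points * BYTES_PER_POINT fits in headroom * vram_gib.

    `headroom` leaves a fraction of VRAM for the driver and display,
    since the full card capacity is never available to the solver.
    """
    required = num_points * BYTES_PER_POINT
    available = vram_gib * GIB * headroom
    return required <= available

# An 8 GiB card (like the RTX 2080) with ~80% usable:
print(fits_in_vram(1_000_000, 8))     # ~1 GiB needed -> fits
print(fits_in_vram(10_000_000, 8))    # ~10 GiB needed, ~6.4 GiB usable -> does not fit
```

Under these assumptions, a 10-million-point sim already overflows an 8 GiB card, which is consistent with the advice above that 16 GB is recommended for bigger OpenCL sims.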