In principle, it could even be faster to run tasks that fit the CPU better than the GPU. It really depends on the performance of the relevant code on the CPU and GPU of interest.

Would that be possible in GPU-resident mode, and if so, what are the performance implications? However, I'm almost considering just keeping that part on the CPU with -pme gpu -pmefft cpu to avoid porting VkFFT to Metal. I'd like to focus more on debugging my nanotechnology design rather than on which processor would simulate it more quickly.

I want to rely on the GPU for all simulations in an everyday workflow, so that either the GPU is always faster than the CPU, or the simulation is so small that switching to the CPU for speed is overkill. When you get down to the order of 10-100 atoms, ns/day is so ridiculously fast that I'm not bothered by it simulating at less ns/day than theoretically possible. My main issue is that for some small simulations, the GPU might be unreasonably slow compared to the CPU.

For (2), I was trying to judge the minimum CPU-GPU driver overhead, not GPU-side performance. Ideally, you would try a 2-atom simulation on any recent GPU; you might even be able to try this right now, if you've never run such a benchmark before. Did this "world record" speed increase after you implemented GPU-resident execution, and by how much? You could set another record with a larger timestep (e.g. 20 fs) and divide the empirical ns/day by 10.
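To make the "divide by 10" point explicit: for a fixed number of integration steps per day, ns/day scales linearly with the timestep, so a run at 20 fs can be normalized back to the conventional 2 fs for comparison. A minimal sketch with made-up numbers, purely for illustration:

```python
# ns/day equals (timestep in fs) * (steps per day) * 1e-6, so for a fixed
# number of steps per day it scales linearly with the chosen timestep.
def normalize_ns_per_day(measured_ns_per_day, used_timestep_fs, reference_timestep_fs=2.0):
    """Rescale a measured ns/day figure to the reference timestep."""
    return measured_ns_per_day * reference_timestep_fs / used_timestep_fs

# Hypothetical benchmark: 1500 ns/day measured with a 20 fs timestep is
# equivalent to 150 ns/day at the usual 2 fs timestep.
print(normalize_ns_per_day(1500.0, used_timestep_fs=20.0))  # -> 150.0
```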
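The 2-atom benchmark idea can also be turned into a back-of-envelope ceiling: when a system does essentially no arithmetic, every MD step still pays a fixed launch/synchronization cost, and that cost alone bounds the achievable ns/day. The per-step overhead values below are assumed for illustration, not measured:

```python
SECONDS_PER_DAY = 86_400.0

def ns_per_day_ceiling(per_step_overhead_s, timestep_fs=2.0):
    """Upper bound on ns/day when each step costs only the fixed per-step overhead."""
    steps_per_day = SECONDS_PER_DAY / per_step_overhead_s
    return steps_per_day * timestep_fs * 1e-6  # convert fs advanced per day to ns

# Assumed 10 microseconds of driver/launch overhead per step:
print(ns_per_day_ceiling(10e-6))  # ~17,280 ns/day at a 2 fs timestep
# Assumed 1 microsecond per step, e.g. with a fully GPU-resident step loop:
print(ns_per_day_ceiling(1e-6))   # ~172,800 ns/day
```

Under these assumptions, cutting the per-step overhead is what moves the ceiling, which is why driver overhead rather than GPU arithmetic dominates tiny systems.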
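Going back to the -pme gpu -pmefft cpu option mentioned above: a minimal sketch of how such a run could be launched, assuming a GPU-enabled GROMACS build and an input named topol.tpr (the run name is illustrative, and -nb gpu is included as the usual companion to PME offload):

```python
import subprocess

# Offload short-range non-bonded work and PME to the GPU, but keep the
# 3D FFT part of PME on the CPU, so no GPU FFT library (e.g. VkFFT) is needed.
cmd = [
    "gmx", "mdrun",
    "-deffnm", "topol",  # run name is illustrative
    "-nb", "gpu",        # short-range non-bonded interactions on the GPU
    "-pme", "gpu",       # PME long-range electrostatics on the GPU
    "-pmefft", "cpu",    # ...except the FFT grids, which stay on the CPU
]
subprocess.run(cmd, check=True)
```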