Hello, I have a program that uses OpenMP to schedule work in parallel on one OpenCL device, i.e. a GPU. Right now this is done with multiple contexts, each with its own queues and buffers. The program stops after some number of iteration steps. I mean it just stops, without exiting, without a segmentation fault or anything. Could it be that allocation from multiple contexts is not thread-safe? Do I have to use one context with a queue per thread (which is my plan for the future anyway)? By the way, this only happens on a GPU device; CPU devices work fine.
Thanks in advance.
Multiple threads operating on a context has been supported since OpenCL 1.1. All OpenCL calls are thread-safe except "clSetKernelArg". Even with this API, multiple threads can still work with unique cl_kernel objects; they just cannot work with the same cl_kernel object at the same time. So per-thread allocation of "cl_kernel" objects will overcome this issue.
Check Appendix A.2 of the OpenCL spec. So, as long as your platform is OpenCL 1.1 or later, you can use just one context and let all your OpenMP threads work with it.
However, if multiple threads are reading/writing shared "cl_mem" objects across multiple command queues, that can result in undefined behaviour. Check Appendix A.1 of the OpenCL spec; that will help resolve your doubts.
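A minimal sketch of that layout, assuming an OpenCL 1.1 platform: one shared context and program, and a private cl_command_queue plus a private cl_kernel per OpenMP thread, so clSetKernelArg calls never touch the same cl_kernel concurrently. The kernel name "advect" and the slice-buffer layout are placeholders, not taken from the poster's code:

```c
#include <CL/cl.h>
#include <omp.h>

/* One shared context/program; per-thread queue and kernel so that
 * clSetKernelArg never races on a shared cl_kernel object. */
void run_slices(cl_context ctx, cl_device_id dev, cl_program prog,
                cl_mem *slice_bufs, size_t global, int nslices)
{
    #pragma omp parallel for
    for (int i = 0; i < nslices; ++i) {
        cl_int err;
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
        cl_kernel k = clCreateKernel(prog, "advect", &err); /* placeholder name */

        clSetKernelArg(k, 0, sizeof(cl_mem), &slice_bufs[i]); /* private kernel: safe */
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clFinish(q); /* drain before tearing down the per-thread objects */

        clReleaseKernel(k);
        clReleaseCommandQueue(q);
    }
}
```

Whether creating the queue and kernel inside the loop or hoisting them out per thread is better is a performance question; the point here is only that no cl_kernel is shared across threads.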
Now, coming to the issue you are facing:
I am not sure what you mean by "the program stops... but no seg-fault". You may want to first find out up to which point the application runs. Or please post your sources as a standalone zip file that we can use to reproduce the problem here.
You need to also specify the following:
1. Platform - win32 / win64 / lin32 / lin64 or something else?
Win7, Vista, or Win8; similarly for Linux, your distribution
2. Version of driver
3. CPU or GPU Target?
4. CPU/GPU details of your hardware
The application is a vortex-particle flow simulation with immersed boundaries. I cannot post the sources here. I'm working on Ubuntu 12.10 x64 with AMD APP 2.8 and the latest Catalyst beta driver, 13.2. The GPU is an HD 7970, and it is on this target that I'm facing these problems. The simulation iterates over many time steps and advances the flow. Since the simulation is 2D-based, we run 2D slices in parallel. Right now every slice has a unique cl_context, buffers, and kernels, but utilizes the same device. On the GPU the program literally just stops. It doesn't exit and the memory is still allocated, but it simply doesn't do anything. This only happens on the GPU, and the debugger doesn't react when I try to break. When I use the CPU device it runs without problems. I will now try an older driver version and get back here.
CodeXL enables profiling. When profiling is enabled, there are no asynchronous operations (like async DMA transfers). Possibly this is a factor (just a guess).
Also, try creating the command queue with "CL_QUEUE_PROFILING_ENABLE" and see if it works correctly on an independent run (without CodeXL).
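For reference, a minimal sketch of that suggestion, using the OpenCL 1.x queue-creation API:

```c
#include <CL/cl.h>
#include <stddef.h>

/* Create a queue with profiling enabled so event timestamps can be queried
 * later via clGetEventProfilingInfo. */
cl_command_queue make_profiling_queue(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    cl_command_queue q =
        clCreateCommandQueue(ctx, dev, CL_QUEUE_PROFILING_ENABLE, &err);
    return (err == CL_SUCCESS) ? q : NULL;
}
```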
In any case, this looks like a bug to me.
If I'm not asking too much, can you try the earlier driver (12.10) and see if it works?
Then we can isolate this to a driver problem.
Also, a repro case is really going to help us solve this problem. A quick, small repro case would be very useful. Thanks,
Update: still no simple example that reproduces the problem. But I tried running my code via SSH. When I am already logged in to the system (with the desktop open) and then run the code remotely, it stops again. But if I'm not logged in locally and start the code remotely, it doesn't stop, and I can log in locally afterwards and it still doesn't stop. This is not perfect, but since I want to set up a GPU workstation it is a workaround, because then I don't need local access.
It has been a while, but my problem still exists. My responses above weren't accurate, because the remote session didn't use the GPU but only found the CPU (the classic headless problem). I am now able to access the GPU remotely, but then my "stopping" problem is back. I believe a deadlock happens when releasing a memory object in the multiple-command-queue, multiple-thread scenario. Here is part of my debug log, taken when execution stops:
#0  0x00007ffff582d420 in sem_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007fffef1f9ba0 in amd::Semaphore::wait() () from /usr/lib/libamdocl64.so
#2  0x00007fffef1f6162 in amd::Monitor::finishLock() () from /usr/lib/libamdocl64.so
#3  0x00007fffef21f6fc in gpu::Device::ScopedLockVgpus::ScopedLockVgpus(gpu::Device const&) () from /usr/lib/libamdocl64.so
#4  0x00007fffef242c3e in gpu::Resource::free() () from /usr/lib/libamdocl64.so
#5  0x00007fffef243207 in gpu::Resource::~Resource() () from /usr/lib/libamdocl64.so
#6  0x00007fffef22fd3d in gpu::Memory::~Memory() () from /usr/lib/libamdocl64.so
#7  0x00007fffef23123f in gpu::Buffer::~Buffer() () from /usr/lib/libamdocl64.so
#8  0x00007fffef1e8998 in amd::Memory::~Memory() () from /usr/lib/libamdocl64.so
#9  0x00007fffef1e9607 in amd::Buffer::~Buffer() () from /usr/lib/libamdocl64.so
#10 0x00007fffef1f41eb in amd::ReferenceCountedObject::release() () from /usr/lib/libamdocl64.so
#11 0x00007fffef1c5a37 in clReleaseMemObject () from /usr/lib/libamdocl64.so
I will now try to finally reproduce this by focusing on threaded allocation and release of memory in a minimal example. Hopefully this leads somewhere. It would be nice to solve this, to convince my boss to buy some 7990 cards for our computing.
Did you get any further with this? I'm pretty sure I have the same problem (I don't think it happens on the CPU).
10 queues (one per thread)
20 kernels (one cl_kernel is instantiated per use for each thread, so they're not shared)
I'm making all writes and reads blocking, and blocking my executions with clWaitForEvents immediately after clEnqueueNDRangeKernel.
(Things seem to hang much earlier if I don't block everything, but I'm not sure yet if it's the same issue)
The faster my code runs, and the more work I throw at it, the quicker it hangs (more memory object allocation/deallocation).
Whenever it stops (just as described above), one thread ALWAYS happens to be releasing a memory object (the others are usually reading/writing).
I understand the object release is thread-safe... (I'm doing it VERY regularly, say, 10 times per kernel, per thread.)
In my case, should I have *any* mutexes? I don't currently, other than for some management on the host side.
Windows 7, driver version in device manager is 22.214.171.124. (I think I'm still using beta drivers)
For my problem, I think this is the source:
Still, up to this point I have not been able to reproduce the problem in a simple example, but I also don't have much time to invest in this. Anyway, since the problem only occurs with the AMD GPU runtime, it seems to be driver related. It happens either if I have one context created by the main thread and accessed by multiple different threads, or if I have multiple contexts created by the main thread and accessed by multiple different threads. Note also that in the latter case no shared memory objects or kernels are used at all.
I realised my image-memory objects weren't using the correct queue (they were all using a "default" queue which the kernels weren't using). I'm not sure why the system still worked, but this may be the cause, even though the hang/deadlock didn't seem related to any memory objects or kernels that were using the image objects at the time.
I added a host-side mutex when releasing memory objects; no help.
I then used that mutex when reading/writing any memory object, which is when I discovered my issue with the image-memory objects.
I'll update shortly if my problem has gone away, but currently my driver crashes before it hangs (though it runs for a lot longer). I think this is an out-of-bounds memory access, as it gives me a memory access violation when I execute on the CPU instead of the GPU.
Okay, I wrote a big long-winded reply, but I think I've solved my problems now. No more access violations (which I thought were out-of-bounds accesses) and no deadlocks (or at least, not for the last few hours) on GPU or CPU.
My new setup;
100 host threads, 100 queues (one each)
N kernels, all instanced per-thread (no cross-queue/cross-thread kernels). All kernels on a thread use the same queue.
All writes are now non-blocking
All executions are non-blocking.
All reads are blocking.
[then kernel and data is disposed]
The problem, I realised in the end, was that whenever I made a write or an execution non-blocking, the data on the queue for that kernel wasn't ready. Perhaps more threads and queues just exposed a problem that was always there; I also read somewhere that having more than one queue per context warrants more clFinish calls. (clFinish before execution also worked, but clFlush still resulted in access violations.)
Anyway, now, for all my non-blocking writes I store the cl_event...
Before execution (though after clSetKernelArg) I do clWaitForEvents on all the events relevant to this kernel/queue.
All my crashes and deadlocks have gone away. I have NO mutexes host-side related to OpenCL, and execution is faster.
I wrongly assumed an execution (blocking or non-blocking) would ensure the relevant data writes on the queue had finished, but it seems not.
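The fix described above can be sketched like this: keep the cl_event from each non-blocking write and make the host wait on them before enqueueing the kernel. The function and buffer names are placeholders, not the poster's code:

```c
#include <CL/cl.h>

/* Non-blocking writes, then an explicit dependency before the launch. */
cl_int launch_with_deps(cl_command_queue q, cl_kernel k, cl_mem a, cl_mem b,
                        const void *ha, const void *hb,
                        size_t bytes, size_t global)
{
    cl_event deps[2];
    cl_int err;

    err = clEnqueueWriteBuffer(q, a, CL_FALSE, 0, bytes, ha, 0, NULL, &deps[0]);
    if (err != CL_SUCCESS) return err;
    err = clEnqueueWriteBuffer(q, b, CL_FALSE, 0, bytes, hb, 0, NULL, &deps[1]);
    if (err != CL_SUCCESS) return err;

    clSetKernelArg(k, 0, sizeof(cl_mem), &a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &b);

    /* Block the host until both writes have completed, as described above. */
    err = clWaitForEvents(2, deps);
    if (err != CL_SUCCESS) return err;

    err = clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    clReleaseEvent(deps[0]);
    clReleaseEvent(deps[1]);
    return err;
}
```

An alternative worth noting: passing `deps` as the `event_wait_list` of clEnqueueNDRangeKernel expresses the same dependency without blocking the host thread at all; the clWaitForEvents variant is shown because that is what the poster describes.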
From what I infer from your post, the bug was due to a misunderstanding of asynchronous execution and had nothing to do with AMD's OpenCL runtime. Please confirm.
And yes, good luck, and thanks for taking the time to post your experience here!
It can be a great time-saver for someone...
And I hope your code runs for many more hours to come and then one day terminates normally...!
Does all this mean that I can do the operations below?
1) Create a single context.
2) Create a single ordered queue for all kernels.
3) Create an ordered queue for each write/read operation. So if I have N read and M write operations, I create N+M queues.
4) From an OpenMP body, simultaneously do the writes/reads:
5) All writes/reads are done, so I can start computing on the GPU:
6) Do a very similar thing for reading the results, as in step 5.
7) Repeat from step 5.
This way, can I get the full PCI Express read/write bandwidth?
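A hedged sketch of the scheme in those steps, assuming in-order queues (one per transfer) joined to the compute queue with events. Names and the fixed queue count are illustrative only; whether the transfers actually overlap (and what bandwidth you get) depends on the driver and how many DMA engines the hardware exposes:

```c
#include <CL/cl.h>

enum { N = 4 }; /* number of input buffers; illustrative */

/* One in-order queue per write; gather their events, then let the compute
 * queue wait on all of them via the kernel's event_wait_list.
 * Kernel arguments are assumed to be set already. */
cl_int write_all_then_compute(cl_context ctx, cl_device_id dev,
                              cl_command_queue compute_q, cl_kernel k,
                              cl_mem bufs[N], const void *hosts[N],
                              size_t bytes, size_t global)
{
    cl_command_queue wq[N];
    cl_event done[N];
    cl_int err = CL_SUCCESS;

    for (int i = 0; i < N; ++i) {
        wq[i] = clCreateCommandQueue(ctx, dev, 0, &err);
        err = clEnqueueWriteBuffer(wq[i], bufs[i], CL_FALSE, 0, bytes,
                                   hosts[i], 0, NULL, &done[i]);
        clFlush(wq[i]); /* push the transfer to the device now */
    }

    /* The kernel launch waits on all N writes (events may cross queues
     * within the same context). */
    err = clEnqueueNDRangeKernel(compute_q, k, 1, NULL, &global, NULL,
                                 N, done, NULL);

    for (int i = 0; i < N; ++i) {
        clReleaseEvent(done[i]);
        clReleaseCommandQueue(wq[i]);
    }
    return err;
}
```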
Right now I'm using only a single ordered queue for all read/write/compute operations, and I have a single singleQueue.clFinish() at the very end. This lets me reach only 1.4 GB/s for read/write buffer operations. I was hoping for 4-5 GB/s on my Gigabyte 990-XA-UD3 motherboard.
Thank you. Now I have a problem: one of the buffers is not read when doing concurrent reads. CodeXL shows holes: 3 simultaneous reads instead of 4. But rarely. Maybe the drivers are buggy? All calls return CL_SUCCESS.
Edit: just tested an example. clEnqueueMapBuffer shows 6.5 GB/s, so I should exchange clEnqueueWriteBuffer for clEnqueueMapBuffer (maybe it uses DMA?).
The CPU read shows 1.3 GB/s, which must be the same thing as clEnqueueReadBuffer plus the slowness of my implementation.
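For reference, the map-based upload path looks roughly like this. The runtime can often service a map with DMA into pinned memory, which may explain the higher bandwidth, but that is implementation-specific, not guaranteed by the spec:

```c
#include <CL/cl.h>
#include <string.h>

/* Fill a device buffer through a mapped pointer instead of clEnqueueWriteBuffer. */
cl_int upload_via_map(cl_command_queue q, cl_mem buf,
                      const void *src, size_t bytes)
{
    cl_int err;
    void *p = clEnqueueMapBuffer(q, buf, CL_TRUE /* blocking map */,
                                 CL_MAP_WRITE, 0, bytes, 0, NULL, NULL, &err);
    if (err != CL_SUCCESS) return err;

    memcpy(p, src, bytes); /* host-side copy into the mapping */
    return clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);
}
```

On OpenCL 1.2+, CL_MAP_WRITE_INVALIDATE_REGION can be faster when the old buffer contents are being fully overwritten, since the runtime need not transfer them to the host first.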
Note: even the AMD examples make CodeXL throw lots of leak errors. I think CodeXL produces a lot of false positives.