Alpha Beta Pruning Algorithm

Nov 26, 2012 at 9:43 AM

I'm working on a project for my class and I've decided to undertake parallelizing alpha-beta pruning for Reversi using CUDA. I'm using this person's project as a base:

http://reflectivecode.com/2008/06/winothello

The idea I have for parallelizing the algorithm is to have the CPU create the root node, then create a left child representing the first move and hand each of the n-1 remaining moves to its own block on the GPU. From the node containing the first move we repeat the same procedure: play another move and send the n-1 remaining moves to the GPU.
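Roughly, the kernel I have in mind looks like the sketch below (Board, Move, applyMove and evaluateSubtree are placeholder names I made up for this post, not the ones from the project):

    struct Move  { int row, col; };
    struct Board { int cells[64]; };   // 8x8 Reversi board

    // Sequential alpha-beta on the subtree handed to this block (body omitted).
    __device__ int evaluateSubtree(Board b, int depth, int alpha, int beta)
    {
        return 0;
    }

    // Apply a move to the board, flipping discs (body omitted).
    __device__ void applyMove(Board* b, Move m, int player)
    {
    }

    // One block per remaining root move; launched from the CPU with n-1 blocks.
    __global__ void searchRootMoves(Board root, const Move* moves, int player,
                                    int depth, int* scores)
    {
        if (threadIdx.x == 0)
        {
            Board b = root;
            applyMove(&b, moves[blockIdx.x], player);
            scores[blockIdx.x] = -evaluateSubtree(b, depth - 1, -1000000, 1000000);
        }
    }

The CPU would then pick the best of the returned scores and recurse on the first-move child in the same way.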

The only issue is that I've noticed I can't pass objects to CUDA, and I have to label the Move, staticEvaluator and a few other functions as __device__. So far it's looking a bit messy and I'm stuck. Any tips or suggestions would be greatly appreciated, thanks.

Coordinator
Nov 27, 2012 at 11:24 PM
Edited Nov 27, 2012 at 11:24 PM

Hi,

I’m not familiar with this algorithm, so I can’t directly help you port it to CUDA. But regarding the rest: all the data you want to pass from C# to CUDA must be either a native (and blittable) type (float, int, etc.) or a struct of such types. That means an array of structs or of native blittable types is a concatenation of the actual data, not of references to data somewhere else in memory, as would be the case for classes (value type vs. reference type). You can still define methods inside a struct (they execute on the CPU only); the only difference between a struct and a class is how they are handled in memory. So you can pass objects to CUDA as instances of structs, but not as instances of classes.
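As a small illustration (the Move layout here is an assumption, not the one from the Othello code): a struct containing only blittable fields has the same flat layout on the C# side and in the *.cu file, so an array of them can be copied to the device and indexed directly inside a kernel:

    // Two plain ints, no references: the matching C# struct is blittable and an
    // array of these is just contiguous data that can be copied to the device.
    struct Move
    {
        int row;
        int col;
    };

    // The kernel receives a pointer to that flat array; nothing per-object
    // has to be marshalled, unlike with an array of class instances.
    __global__ void countCornerMoves(const Move* moves, int n, int* cornerCount)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
        {
            bool corner = (moves[i].row == 0 || moves[i].row == 7) &&
                          (moves[i].col == 0 || moves[i].col == 7);
            if (corner)
                atomicAdd(cornerCount, 1);
        }
    }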

To create CUDA kernels, you need to create a *.cu source file and compile it with nvcc to obtain a *.cubin or *.ptx kernel image that you can load with managedCUDA. The kernel itself is labeled __global__, and other functions you want to call from within the kernel get the label __device__.
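For example (a minimal sketch, not your actual evaluator): a helper like a static evaluation function gets __device__, the entry point gets __global__, and you compile with something like nvcc -ptx kernels.cu so managedCUDA can load the kernel by name from the resulting image:

    // kernels.cu  --  compiled with e.g.:  nvcc -ptx kernels.cu -o kernels.ptx

    // Helper callable only from device code.
    __device__ int staticEvaluator(const int* board)
    {
        int score = 0;
        for (int i = 0; i < 64; ++i)
            score += board[i];      // placeholder evaluation, just sums the cells
        return score;
    }

    // Kernel entry point; extern "C" keeps the name unmangled so it can be
    // looked up from the *.ptx by name.
    extern "C" __global__ void evaluateBoards(const int* boards, int* scores, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            scores[i] = staticEvaluator(boards + i * 64);
    }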

Hope this helps a little ;-)