- Are there dependencies between tasks?
- Does the result of the operations change if you change the execution order?
- Will there be contention for data or some other resource?
- Will we get a return on investment for the overhead of configuring and running in parallel?
- Do we have a significant number of tasks to run? We should have more tasks to run than we have cores.
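As a rough sketch of these criteria in practice (the names and the workload here are illustrative, not from this post), Python’s standard `concurrent.futures` can run independent, order-insensitive tasks across cores, with more tasks than cores so every worker stays busy:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def partial_sum(bounds):
    """Independent task: no shared state, and the result does not
    depend on the order in which tasks complete."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, chunks):
    """Sum 0..n-1 by splitting into more chunks than we have cores."""
    step = n // chunks
    ranges = [(i * step, (i + 1) * step) for i in range(chunks)]
    ranges[-1] = (ranges[-1][0], n)  # last chunk covers any remainder
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        # Combining with sum() is order-independent, so no contention.
        return sum(pool.map(partial_sum, ranges))

if __name__ == "__main__":
    print(parallel_sum(1_000_000, 16))
```

Whether the process-pool overhead pays off depends on the size of each chunk; for tiny workloads the serial version will win.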
I’ve been reading Neal Ford’s The Productive Programmer. It’s a good read so far. I’ve picked up a few tools that have already been helping. Pick up the book for the details on the reasoning behind having/using these utilities.
- Virtual Desktop Manager: VirtuaWin. This lets you have multiple virtual desktops. I’m using one for communication apps, one for development, one for research and one for miscellaneous stuff.
- Clipboard Manager: Ditto. So far, I’m not very fond of the UI, but I like the way it works. It lets you work with a clipboard stack across all of your apps. I used it to clip all of the URLs for this article and then paste them in appropriately.
- Clipboard Manager that is a Windows Gadget. Lots of background choices. Since it’s a Windows Gadget, it’s always visible. However, I’m not liking the way it works, so I’ll probably drop it. I just thought I’d give it a whirl and that gadget lovers might go for it.
- Cygwin, for bash, *nix-ish utilities, vim, Emacs, wget, curl, etc. Keep the downloaded executable around; it’s really a loader, not an installer per se. A download folder accompanies the loader, so you may want to plan where you’re going to keep that before you run it.
A lot of people talk about code smell, what it is, how to detect it and anti-patterns. However, the thing that struck me today was this: I was out mowing the yard. The temperature was like a billion degrees, it was humid and I was mowing uphill no matter which way I turned! So basically, it was hard work. The thing that struck me was that I hit a section of air that smelled like a fragrant fabric softener. It really perked me up. I felt rejuvenated and mowed with more gusto. I looked forward to returning to the portion of the yard that had the scent in the air.
That got me thinking about code smell. Besides code taking us more time to work through because of quality issues, there is also the issue that it smells! It is repulsive, it is repugnant, it is de-motivating! Who wants to go work on the smelly code? No one wants to do it. It is like being told to retrieve a book that is in a library that is located in the middle of a landfill. No one wants to do that. I think there is a psychological aspect of smelly code that should be considered. That is, people want to stay away from stinky code.
On the flip side, are you excited when you get to work in clean code? Code that is well factored, code that uses patterns, code that you can understand, code that is a joy to work with. I can honestly say that if I have an application to work on and I know that it exhibits the qualities above, has a good design and has a reasonable amount of documentation, I get excited about working on it.
Also, when I’m working in clean code I don’t want to be the one that dirties it up—regardless of who wrote it.
In conclusion, I would counsel that messy code, besides having quality issues, is also de-motivating. Clean code, good-smelling code, is a joy to work with. Strive for clean code.
What is GPGPU computing? GPGPU stands for General Purpose computing on Graphics Processing Units. The basic idea is that we use the graphics card to do some of our computing. Today most computers have 2 cores, and some have as many as 6. With hyper-threading we may see what appears to be double that many processors. However, a video card such as the NVidia GTX 480 has 480 “cores,” or processors.
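What makes all those cores usable is the data-parallel programming model: you write a small “kernel” function that computes one output element, and the GPU runs it across thousands of threads at once, one per index. As a CPU-side analogy only (this is plain Python, not actual CUDA or OpenCL code, and the names are mine), the idea can be sketched as:

```python
def saxpy_kernel(i, a, x, y, out):
    """What one GPU thread would do: compute a single output element."""
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    """Stand-in for a GPU launch: run the kernel once per index.
    On a real GPU these iterations execute concurrently across cores."""
    out = [0.0] * n
    for i in range(n):
        kernel(i, *args, out)
    return out

result = launch(saxpy_kernel, 4, 2.0, [1.0, 2.0, 3.0, 4.0], [10.0, 10.0, 10.0, 10.0])
# each element is 2*x[i] + y[i]
```

Because each index is computed independently, there are no dependencies between “threads,” which is exactly the shape of problem GPUs excel at.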
Portable or Desktop?
If you choose a portable machine, you will be able to take the unit with you for presentations, research or on-the-go development. However, you will typically sacrifice power or the number of cores the video card has available. For instance, a laptop with a GT330M has 48 cores, while a desktop GT330 has 96-112 cores.
You’ll also likely have a much harder time finding a suitable laptop. When choosing a desktop, go for a mini-tower case or one that can be purchased with the appropriate video card already installed. Current video cards often draw additional power from the power supply and require additional ventilation. For instance, you could purchase an HP HPE-380T with an NVidia GTX260 with 1.8GB of DDR3 RAM and 192 cores!
ATI or NVidia?
NVidia is known for its CUDA entry; ATI has ATI Stream computing. Both companies now have drivers for OpenCL. I chose to go with NVidia because I was able to find the resources I needed for their products; ATI Stream is fairly new.
Whichever you choose, you will need to know what type of computing/gaming you plan to do and whether there is existing support for it. Also, make sure to check the manufacturer’s pages to see if the video card you have selected is supported for GPGPU use.
In general, the more cores or shaders the better. Also pay attention to how fast they are running.
What type of memory? In general, you want to stay away from shared memory, which uses the slower system memory and steals resources from the PC. Look for DDR3 or DDR5 memory; sometimes they are prefixed with a G, as in GDDR3. In general, DDR5 is faster.
How much memory? A professor with experience in the field suggested 512MB minimum; if you can get more, that is preferred.
Sony CW, F and Z series
Because I wanted to be portable and have the minimum 512M with an NVidia card, I considered the Sony CW, F and Z series computers.
However, these machines are new enough that the released video drivers do NOT support OpenCL. I did find that there is a way to modify the released NVidia drivers to work on them.
After using the information above, I was able to run the Particles demo from here:
However, the hardware identification and mandelbrot set generator on that site don't work.
I also downloaded and tested a CUDA based mandelbrot application that was listed on the NVidia featured applications site. It ran nicely.
This summer I will start a journey into Computational Transportation Science (CTS), Intelligent Transportation Systems (ITS) blended with parallel, distributed and agent computing.
I will specifically be exploring OpenGL and OpenCL. One of my major objectives is to explore the use of the Graphics Processing Unit (GPU) in “general purpose” computing (GPGPU). Specifically, I will be utilizing the NVidia GT330M GPU.
Along the way I’ll also likely explore other environments such as JOMP, JPPF and MapReduce. I’m also planning to use Neo4j for persistence.
It is my intent to model some non-signal controlled intersections this summer. Long term I will explore coordinated signal controlled intersections.
As part of this exploration I will also seek input and insight from those in the industry and in academia so that I can avoid reinventing the wheel and stand on their shoulders.
Forthcoming posts will include specifics about hardware selection and software installation.