If you wish to submit an article, please contact email@example.com for details.
Lots of good resources are linked from this site by Daniele Bartolini: Data Oriented Design Resources
Data-Oriented Design book (2013 beta version) - PDF download
This is a free resource. Feel free to read, copy, download, upload, print, burn it to CD, or hand it to someone on a pen drive, but do not claim the work as your own or charge anyone for the right to read the material.
Unity gave an amazing presentation in Los Angeles, and during the keynote they demonstrated the power of the data-oriented design approach to game development with a high-performance demo running on an iPhone. https://youtu.be/alZ6wmwvck0?t=6434
Unity has been doing some amazing work over the last few years, and seeing the outcome of their efforts like this is gratifying to say the least. Data-oriented design has always been a good way to move towards performance and scalability, but having a live example running in a public presentation like this will hopefully bring more people on board with the movement as a whole. Thanks to all those who have supported this site over the years, and feel free to suggest any other articles to link to on this front page.
As promised, the online resource has also been updated. It is in a new folder here so that links to the old resource still work.
Unfortunately, due to the scope of the changes in the official first edition, the layout of the chapters has changed. If you have linked to content and want to update your links to point to the new online book, a simple update of the name in your links will not suffice. You will need to hunt down the content you wish to link to in the new document.
The content in the new Data-oriented design book has seen big changes, and though it's important to keep information free, running the site does cost something, so I plan on making it a full-price book to bring in the funds to keep the site running.
Currently, the plan is to upload the new version of the site-based book and release the paperback for sale at the same time. The online version isn't quite as pretty as the hard copy, as the LaTeX-to-HTML conversion I use doesn't work perfectly, but at least this way students, and those less well off, can get the same content, albeit in a less beautiful form.
I've tried to get LaTeX to convert to EPUB, but I must have some configuration files wrong, as conversions always fail. So right now, the plan is that if I get enough sales of the paperback, I will consider hiring someone to make the Kindle version available.
It's been a long time coming, but the book that was put up for free, as you can see here, finally has a successor: a first true edition that attempts to fix many of the issues with the previous version, and also fixes the problem of it not being generally available.
Data-oriented design, the book, was first started in 2010 as a collection of e-mails and blog posts, and was worked on part-time for its whole life. Finally, the writer has had the time and space to work on the updated version long enough that it's now ready for review prior to final release.
If you wish to be one of the reviewers of the new version, please send an e-mail to firstname.lastname@example.org to register your interest. Signups are over for now.
Proof PDF copies are likely to be available from the 20th of July. The final release, physical book, and EPUB or Kindle versions are likely to be available nearer November.
Mike Acton ran a master class during Game Connection in Paris, and Jeremy Laumon (Patagames) wrote up his experience. What's nice about this is that it talks about understanding the data much more than it talks about what most people tar data-oriented design with, such as cache miss scares and memory bandwidth issues. In the workshop, Mike forces the participants to shed themselves of assumptions time after time.
It's an old article, but Amanda writes about AoS vs SoA, and presents AoSoA as an alternative that could increase data locality along both axes of the data without adding a large cost to either.
Nice to see Unity taking data-oriented design seriously. The massive difference in CPU utilisation suggests a much better tasking framework than initially expected. See here
I came across this article on writing a really, really fast JSON parser here
There are a few links in there that take you on a great tour of optimisations if you can spare the time to follow them, including a lovely post about a 50% improvement in speed in SQLite.
I was recently shown a video by Scott Meyers on CPU caches, and the first ten minutes alone are a reasonable push for understanding that DOD affects all languages, not just C++ or C.
Here is the link to the video for your viewing pleasure: https://vimeo.com/97337258
Manipulating data in structure-of-arrays format can be unwieldy for some, but this post talks about making things easier using some simple templating to replace the manual side of iterating through the arrays.
Read here C++ encapsulation for Data-Oriented Design: performance and learn about keeping your DOD SoA approach tidier.
It has become more obvious to people involved in optimisation that the x86 architecture is a difficult platform to understand at the core. This is partially because of the multitude of different CPUs out there that support the instruction set, each with their different timings, but also because of this latest breed of extraordinarily out of order CPUs. Knowing what's actually going to happen in an i7 has become a near impossible task.
Read Robert Graham's post on x86 is a high-level language and try to see why it's so very difficult to grok the flow of data in these chips, and also how it's very difficult to guess what will be the best performing algorithm without doing a lot of real world tests.
Nice read on why grep is quick. Some simple stuff, some awesome algorithm usage, and generally the kind of thing you might want to keep in mind in case you come across a search problem that is similar to grep's in any way.
I was skeptical at first, but the author appears to have tested his efforts on real hardware, which of course is a core tenet of DOD. Also, this is not a post about a new invention, but a set of results from tests where the author replaces a hash table with alternatives. It's interesting to look at the different timings, but remember to test your own code and not just follow blindly, as you may have overhead somewhere else that makes the slowest option in these tests suddenly the fastest.
Choose a paradigm that allows for the simplest, least complex, most provably correct code.
Here's another example of premature optimisation:
Swap data for energy, and the demand-oriented approach to fulfilment changes the function used to determine fitness. With energy, the demand over time was well known, but ignored by thousands of people installing expensive hardware.
A lovely book on optimisation by Carlos Bueno (from Facebook's performance team).
Find the book available for free download here
As reference material for the book, a github project has been started to show the development of a game in both the Object-Oriented and Data-Oriented approaches.
Expect slow updates right now, as it only has one developer and they are in full-time employment at a startup, so spare time is scarce. However, if you wish to follow along, the project is hosted here on GitHub for all to see.
In addition to the parallel game development, submissions from other developers would be appreciated, specifically any demo code that provides ways to build timings for the performance-oriented points of the book. For example, any code that could be used to directly show the impact of bad pipelining, bad cache alignment, or even the effects of write combining. The only rule will be that it has to be simple and able to run on many platforms. Single-platform statistics aren't much use unless they are targeting currently trending hardware like ARM-based CPUs.
When traversing objects stored on an intrusive linked list, it only takes one pointer indirection to get to the object, compared to two pointer indirections for a std::list of pointers to objects. This causes less memory-cache thrashing, so your program runs faster, particularly on modern processors, which have huge delays for memory stalls.