An NLogN 2D Polygon Splitting Algorithm

I needed to come up with a solution to help designers author 2D polygons over flat surfaces directly in Unreal Engine 4. I was also asked to provide a quick way to split an authored polygon into multiple ones.

In a nutshell, designers wanted to define a 2D polygon, like the one in figure 1 but also have the ability to define internal segments:

Figure 1 – An example of a (split) polygon

I solved the authoring part by extending the UE4 spline component. However, I needed to keep the splitting part simple, both for me to implement and for designers to use. I didn’t have much time and had to find a quick solution that gave the best bang for the buck. I came up with two options:

  • Use one or more splines to partition the polygon.
  • Add another type of vertex to the spline component and use it as an “internal vertex”.
Figure 2 – Option one using splines on the left, option two using internal vertices on the right

The first option seemed appealing as it gave designers more authoring freedom. However, it also meant more work for them, creating (and authoring) more than one spline, and on my end I had to write quite a lot of boilerplate code to support it.

The second option seemed like a hack, but adding a second type of vertex to the spline component (actually to its metadata) was easy to do, and for designers it was definitely the easier option as well.

Maths is fun (really!!!)

After deciding to go with option #2, the only “bit” left was to write a 2D polygon splitting algorithm that, given a convex or concave polygon and a list of non-intersecting internal segments, spat out a list of sub-polygon indices.

Let’s take a look at a concrete example (figure 3) and let’s (for fun) define the problem mathematically: $I$ represents the indices of our polygon and $S$ the pairs of indices that split the polygon.

Figure 3 – A poly to split

$ I = \{ k : k \in \mathbb{N} \} $
$ S = \{ (a, b) : a, b \in I \} $

so we have in figure 3:

$ I = \{ 0, 1, 2, 3, 4, 5, 6, 7 \} $
$ S = \{ (1,7), (1,6), (1,3), (1,4), (6,4) \} $

The expected outcome would be something similar to this:

$ T = \{ (1,7,0), (6,7,1), (6,1,4), (1,2,3), (1,3,4), (4,5,6) \} $

I have to admit that the first algorithm I came up with was just plain inefficient. It only needed to run in the Unreal Engine editor and surely designers weren’t going to create polys with hundreds of thousands of vertices, so it turned out to be fine in the end.

I couldn’t live with it

Even though the algorithm worked, it wasn’t the fastest, and I kept thinking about how inefficient my implementation was. Just to give you a hint, the time complexity was something like $ \Theta(n^3) $, let alone the space complexity, which I’d rather omit if you don’t mind. Let’s pretend the following solution was the first I came up with.

After spending some time thinking it through, I came up with a better solution that runs in $ \Theta(N \log N) $ time. It’s not super fast…but surely better than cubic time complexity!

Another one bites the maths

First of all, we need to order our segment list $S$ by the absolute index difference of each pair, smallest first. That is, sort by:

$ \lvert a_i - b_i \rvert $

Our set $S$ then becomes:

$ S = \{ (1, 3), (6, 4), (1,4), (1, 6), (1,7) \} $

Sorting an array isn’t too expensive (depending on the sorting algorithm, of course). As an alternative, a priority queue could be used, so elements are ordered upon insertion. Taking elements off a priority queue is not constant time, but it’s relatively cheap: $ \Theta(\log n) $.
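
For illustration, assuming the internal segments are stored as plain index pairs, the ordering step could be as simple as a sort with a custom comparator (a minimal sketch; the final implementation further below uses a priority queue instead):

#include <algorithm>
#include <cstdlib>
#include <utility>
#include <vector>

// Sort segments (pairs of polygon indices) by their absolute index difference,
// smallest difference first. This is the ordering the splitting step relies on.
void sort_segments(std::vector<std::pair<int, int>>& segments)
{
    std::sort(segments.begin(), segments.end(),
        [](const std::pair<int, int>& lhs, const std::pair<int, int>& rhs)
        {
            return std::abs(lhs.first - lhs.second) < std::abs(rhs.first - rhs.second);
        });
}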

Now, for each index pair in the sorted list $S$, let’s collect the indices in between that are still in $I$, and then remove them from $I$. Formally, for each pair $(a_i, b_i) \in S$ (with $a_i < b_i$):

$ P_i = \{ a_i \} \cup \{ k \in I : a_i < k < b_i \} \cup \{ b_i \} $

$ I = I \setminus \{ k \in I : a_i < k < b_i \} $

If for any reason $ a_i $ or $ b_i $ is not in $I$, then we have intersecting internal segments. Our final list $T$ is going to be the union of the $P_i$ and what’s left in $I$.

$ T = P \cup I $

The first segment in $S$ is $(1, 3)$. We need to take all the indices still in $I$ between 1 and 3. In this case it’s going to be index 2 only, and our first polygon is $(1, 2, 3)$.

$ P_1 = \{ 1, 2, 3 \} $

Now we need to remove index 2 from $I$.
$ I = \{ 0, 1, \cancel{2}, 3, 4, 5, 6, 7 \} $

The second segment is $(6, 4)$, the index in between is 5. Let’s apply the same logic:

$ P_2 = \{ 6, 5, 4 \} $

Now remove 5 from $I$:
$ I = \{ 0, 1, \cancel{2}, 3, 4, \cancel{5}, 6, 7 \} $

The third segment is $(1, 4)$. Now, the indices in between are 2 and 3, but 2 is no longer in $I$, so we can only use index 3, which makes our third poly:

$ P_3 = \{ 1, 3, 4 \} $

And let’s remove 3 from $I$:

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, 4, \cancel{5}, 6, 7 \} $

The fourth segment is $(1, 6)$. The indices in between are 2, 3, 4 and 5, but only 4 is still in $I$:

$ P_4 = \{ 1, 4, 6 \} $

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, \cancel{4}, \cancel{5}, 6, 7 \} $

And finally segment $(1, 7)$. The indices in between are 2, 3, 4, 5 and 6, but only 6 is left within this range in $I$. Thus:

$ P_5 = \{ 1, 6, 7 \} $

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, \cancel{4}, \cancel{5}, \cancel{6}, 7 \} $

So all we actually have left in $I$ is $ (0, 1, 7) $.

Now for the last bit: we need to merge $P = \{ P_1, P_2, P_3, P_4, P_5 \} $ with what’s left in $I = \{ (0, 1, 7) \} $:

$ P = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7) \} $

$ T = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7) \} \cup \{ (0,1,7) \} $

$ T = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7), (0,1,7) \} $

which matches the expected outcome from figure 3 (up to the order of the indices).

Show me the code!

Ok ok…if you’ve had enough of this maths nonsense, so have I. Let’s put together some simple C++ code that does the job. First, let’s define the struct Segment, which contains the pair of indices that split the polygon, along with the greater-than and less-than comparison operators.

// standard library headers used by the following snippets
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <iterator>
#include <queue>
#include <set>
#include <vector>

struct Segment
{
    int32_t indexFrom;
    int32_t indexTo;

    bool operator>(const Segment& other) const
    {
        return std::abs(indexFrom - indexTo) > std::abs(other.indexFrom - other.indexTo);
    }

    bool operator<(const Segment& other) const
    {
        return std::abs(indexFrom - indexTo) < std::abs(other.indexFrom - other.indexTo);
    }
};

Now the struct Poly, which simply contains a list of indices:

struct Poly
{
    std::vector<int32_t> indices;
    Poly()
    {
        // reserve a little space up front to avoid early re-allocations
        indices.reserve(6);
    }
};

and now the main function that does the work. I decided to use an inverted priority queue, that is, a min-heap where the root is the smallest element:

bool find_polys(std::set<uint32_t>& indices,
    std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>>& segments,
    std::vector<Poly>& polys)
{
    // we know there are going to be S + 1 polys
    polys.reserve(segments.size() + 1);

    while (!segments.empty())
    {
        const Segment& segment = segments.top();
        polys.push_back(Poly());
        auto& polyIndices = polys.back().indices;
        int32_t indexFrom = std::min(segment.indexFrom, segment.indexTo);
        int32_t indexTo = std::max(segment.indexFrom, segment.indexTo);

        polyIndices.push_back(indexFrom);

        // if either segment endpoint has already been removed then we have intersecting segments!
        auto from = indices.find(indexFrom);
        if (from == indices.end() || indices.find(indexTo) == indices.end())
        {
            return false;
        }
        from++;
        // add the indices between from and to that are still in I, removing them from I as we go
        while (from != indices.end() && *from < static_cast<uint32_t>(indexTo))
        {
            polyIndices.push_back(*from);
            from = indices.erase(from);
        }
        polyIndices.push_back(indexTo);
        segments.pop();
    }
    // anything left in indices becomes the last poly in the list
    polys.push_back(Poly());
    std::copy(indices.begin(), indices.end(), std::back_inserter(polys.back().indices));

    return true;
}
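
To tie this back to the figure 3 example, here’s a quick, hypothetical driver (assuming the snippets above are compiled together into one file): it feeds the eight indices and the five internal segments from the walkthrough into find_polys and prints the resulting sub-polygons.

#include <cstdio>

int main()
{
    std::set<uint32_t> indices = { 0, 1, 2, 3, 4, 5, 6, 7 };

    std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>> segments;
    segments.push(Segment{ 1, 7 });
    segments.push(Segment{ 1, 6 });
    segments.push(Segment{ 1, 3 });
    segments.push(Segment{ 1, 4 });
    segments.push(Segment{ 6, 4 });

    std::vector<Poly> polys;
    if (!find_polys(indices, segments, polys))
    {
        std::printf("intersecting segments!\n");
        return 1;
    }

    // prints the six sub-polygons from the walkthrough, one per line
    for (const Poly& poly : polys)
    {
        for (int32_t index : poly.indices)
            std::printf("%d ", static_cast<int>(index));
        std::printf("\n");
    }
    return 0;
}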

Asymptotic Complexity

It seemed to me the time complexity is $ \Theta(N \log N) $, where the linear part is iterating over the segments and the logarithmic bit comes from looking up the indices in between (and popping the segment off the queue). I didn’t trust myself though, so I installed Google Benchmark and wrote some code to stress test find_polys. I wasn’t very familiar with it before, but it’s a well-established library from Google for benchmarking code snippets, and it also estimates asymptotic complexity! Exactly what I needed 🙂

For the benchmark I tested the function with an increasing number of indices, from 10 to 100,000. I generated the segments between the indices automatically, so that the polygon would be fan-triangulated.
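
The generateSegments helper referenced below isn’t reproduced here; as an illustration, a fan-triangulating generator matching the call site (and reusing the headers and the Segment struct defined earlier) could look something like this. It’s a sketch under that assumption, not necessarily the exact helper used for the benchmark:

// Builds the full index set {0, ..., n-1} and the fan-triangulation
// diagonals (0,2), (0,3), ..., (0, n-2), i.e. n-3 segments for n vertices.
void generateSegments(int numVertices,
    std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>>& segments,
    std::set<uint32_t>& indices)
{
    for (int i = 0; i < numVertices; ++i)
        indices.insert(static_cast<uint32_t>(i));

    for (int i = 2; i <= numVertices - 2; ++i)
        segments.push(Segment{ 0, i });
}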

This is the benchmark code:

#include <benchmark/benchmark.h>

static void BM_find_poly(benchmark::State& state)
{
    int numVertices = static_cast<int>(state.range(0));
    int numSegments = 0;
    for (auto _ : state)
    {
        std::vector<Poly> polys;
        state.PauseTiming();
        // build the input outside of the timed region
        std::set<uint32_t> indices;
        std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>> segments;
        generateSegments(numVertices, segments, indices);
        numSegments = static_cast<int>(segments.size());
        state.ResumeTiming();
        find_polys(indices, segments, polys);
    }
    // set the number of elements used for the complexity fit
    state.SetComplexityN(numSegments + numVertices);
}

BENCHMARK(BM_find_poly)->Unit(benchmark::kMicrosecond)->RangeMultiplier(10)->Range(10, 1E5)->Complexity();

BENCHMARK_MAIN();

Figure 5 – Yes, I called my PC Thanos…

And boom, I was right! Yay! My implementation is indeed $ \Theta(N \log N) $! I didn’t trust Google Benchmark (but I should have), so I also plotted a graph in Excel using the output data:

Time complexity is in fact $ N \log(N) $!

Conclusions

While it’s not an algorithm I’d personally employ at runtime in a game, it isn’t too slow to be used in an editor (32 ms to process 100,000 vertices and 99,997 segments), especially considering I tested it on an old CPU from 2013.

I hope you have enjoyed the article and that you liked my 2D polygon splitting approach. If you manage to come up with a more efficient solution, get in touch and let me know!

The Electronics Tales of an Ingenuous Geek

The Beginning

A few months ago, while I was enjoying breakfast with a friend of mine, we happened to talk about how cool it would be to build a video game console from scratch. He mentioned a book called “The Black Art of Video Game Console Design” and how much he had enjoyed it during college. He also added that I should probably get it if I wanted to take on the challenge.

I still cannot explain to this day why I did it. A few days later I purchased it, ingenuously looking forward to an easy and interesting read.

The Mistake

When the three-kilogram, one-thousand-page book arrived, I realised the mistake I had made. It was surely interesting…but not easy at all! It was all about electronics, circuit analysis, simulation and PCB design. In a nutshell, what the title said: the art of building a console using electronic components. Only God knows what I was thinking and why I kept listening to my friends.

What do you think any sane person would have done? Thank the 30-day return policy and send it back, right? No, I decided to keep it and study it instead. I put aside Ghost of Tsushima, which had just been released, and started an unexpected journey into something I didn’t see coming.

I remembered very little from my undergraduate degree. Luckily the book starts easy, giving the reader some general notions: how electrons are moved by a potential difference, the difference between AC and DC, the definitions of an ampere, a coulomb, a joule, a volt and so on.

While I was progressing through the chapters, something about the subject struck a chord in ways I cannot explain. The more I learned, the more eager I became to study.

Honestly, who cares nowadays how a transistor, a diode, a capacitor or a resistor works? And yet they are the building blocks of any CPU, video game console, mobile phone, TV and pretty much anything else these days.

Hello World!

With some renewed knowledge, I got a breadboard (a construction base for prototyping electronics), a resistor and a light-emitting diode (LED), and did this:

The hardware equivalent of Hello World! Even though it was something simple, I was so happy!

“I can do this” I thought. After that, something changed. I was thrilled, excited and eager to discover, learn and experiment more. I kept on reading the book, and when I completed the chapter about transistors and how to build simple gates, I bought a bunch of them and tried it myself!

The Inverter Gate

Understanding how transistors work is easy: they are essentially switches controlled by an input current. However, setting them up, understanding their different properties and choosing which one to use for a given use case can take some time.

After some initial tinkering and trial and error, I built my first inverter gate:

Pressing the tactile switch sinks the current to ground and the LED turns off.

The excitement took over this time. It was time to build something bigger and more challenging (very much like programming, isn’t it 🙂). Space on a breadboard is limited, and in order to build something more complex I needed to tackle the problem from a different angle. It was time to go back to reading mode!

The Answer

I found my answer after finishing the chapter about TTL (transistor-transistor logic) chips: an entire family of integrated circuits, called the 7400 series, first introduced in the late 60s. It contains hundreds of devices providing everything from basic logic gates, flip-flops and counters to special-purpose bus transceivers and arithmetic logic units. They were super expensive back then, but now they are extremely cheap.

While I was surfing the web for ideas on what to do with them, I came across a cool and interesting video series by Ben Eater on how to build an 8-bit CPU using only breadboards and 7400-series chips. Of course, building a CPU would be amazing!

And there I had it, the perfect project for my electronics needs.

The SAP 1

The CPU is based on the SAP-1 (Simple As Possible) architecture, as explained in Paul Malvino’s book “Digital Computer Electronics” from 1976.

The SAP-1 architecture

In the first four videos Ben explains how to build a clock module. He’s really good at breaking down complex topics into something easy to understand. I’m not ashamed to say that I must have watched those four videos a hundred times. Despite being easy to follow, there’s a lot of knowledge to be digested, especially for a beginner.

I didn’t just want to follow a guide on how to build a CPU step by step. I wanted to deeply understand it. Period. I needed to know where to place ICs, how to wire them and why.

The Clock Module

Many hours, videos and articles later, I finished the clock module, with a few modifications of my own compared to Ben’s version.

The clock module

It felt fantastic. After all the effort and hard work, the clock module was working. The only problem left was…well, it was just a clock module! I needed other modules if I wanted to make something useful with it.

The Untold Truth

The real challenge, though, began when I added more modules to the build. I later found there’s an untold truth about this project: even if you follow the videos exactly and do what Ben does, you’re very likely going to bump into issues unique to your build, your setup, the components you use and, of course, the mistakes you make.

I built the registers A and B and eventually the instruction register. It was all good until I connected everything together. I turned it on, and after a few seconds the LEDs, all of them, started flashing in a random order, very much like a Christmas tree. I triple-checked that I had connected the pins correctly, and when I couldn’t find a solution I searched online.

Troubleshooting

This was my first lesson learned in the electronics world. Like in software, if you have a bug you don’t google the specific issue you’re having. You debug your code, you try to replicate or isolate the issue until you find and fix it. If you google “My circuit flickers like a Christmas tree” I bet you won’t find many useful results. (Disclaimer: I actually googled it.)

At that stage I realised I was not able to troubleshoot/debug my circuit and I actually didn’t have any tools for it. In particular:

  • A multimeter: a nice tool to measure current and voltage.
  • An oscilloscope: a handy but expensive tool to graph how voltage changes over time.
  • A logic analyser: a super handy and also very expensive tool to verify the logic of your circuit.

Buying all of them was too expensive and probably unnecessary. I bought a cheap multimeter and spent some time figuring out how to use it properly. YouTube helped a lot with that.

I troubleshot module by module until I found the bug: I had swapped the +5V and ground connections on one of the registers!

Troubleshooting in progress

With the instruction register finally working it was time to move on and build the ALU!

Along Came the ALU

It was fun building the ALU. I managed to complete it without watching the videos, although I did read about 2’s complement in the book and also watched Ben’s videos on the topic.

And of course I was punished for not reading the datasheet correctly…I had swapped the connections of the least and most significant bits. In fact, the result was the reverse of what I expected.

Well done me! Not only was it reversed, it was wrong too! 11 + 11 is actually 110, not 111.
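
(In decimal terms that’s $ 11_2 + 11_2 = 3 + 3 = 6 = 110_2 $, so the correct sum needs three bits.)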

I unplugged all the wires and started again from scratch. After some more tinkering I completed the ALU. This time it worked fine:

111 – 101 = 10

The Frequency Counter Module

With the clock module, the registers and the ALU working, I started putting together the RAM, which is still a work in progress. In parallel, I have been working on another module to measure the clock module’s frequency. In the 90s, PCs had a two-digit 7-segment display showing the CPU frequency in megahertz, and I thought it’d be cool to have something similar.

aaah the good ol’ days

This is what I came up with and guess what…it’s a flawed design (well done me, again \o/ ).

Unfortunately it’s not precise at all. I used a 555 timer instead of a quartz oscillator, and a frequency of 1 Hz is detected as 0 or even 3 Hz, which is pretty bad. As the frequency goes up the error increases, so the module is totally useless. I need to explore alternatives, but this is what happens when you don’t follow any guide, you mostly don’t know what you’re doing, and you just explore and play with electronics. It’s fun nonetheless, even if you don’t get it right the first time!

The Best Worst Mistake ever Made

I can’t believe I fell in love with electronics so much. Over the last few months I’ve learned a lot and I’m still learning every day. Buying that book about video game console design was the best worst mistake I’ve ever made 🙂

That’s it for now. When I finish building the RAM and the control unit I’ll write a follow-up article. When I have time I’ll write another one about all the challenges and technical pitfalls of this project.

I hope you enjoyed the article, see you soon!

My four mistakes with Vim

I have always been curious about using Vim. The legend says you can write/edit text at the speed of thought. As a coder I have always been intrigued by that, and hoped to unlock the “code at the speed of thought” trophy by mastering it.

A few years ago I gave Vim a try by installing a Visual Studio plugin called VsVim, which emulates most Vim commands. I wasn’t familiar with any of it, or with Vim’s modal nature, so I naively thought I could easily pick it up just by using it.

The experience was awful.

I couldn’t get anything right; I was completely confused by Vim’s modal nature. Instead of writing code at the speed of thought, I wasn’t actually writing anything that made sense whatsoever. I then tried to kill Vim with fire.*

*Vim was still fine afterwards…I wasn’t.

Mistake #1 – I completely underestimated Vim

Curiosity was still there when, a few months ago, I decided to look at Vim again. It was difficult to tell whether Vim was still worth learning in 2020, with so many IDEs and plugins out there. However, the idea of “writing code at the speed of thought” was still in my mind. Unfortunately, no matter how many articles I read or videos I watched about Vim, I couldn’t really tell whether it was worth my while or not. I then figured there was no other option but to learn Vim myself if I wanted an answer to that question.

I also asked a few friends from different coding backgrounds (DevOps, C++ games development, Javascript, C# and Python) what their opinion of Vim was. To summarise the feedback I got (DevOps aside): Vim is a sort of hacky text editor for nostalgic, old-fashioned Unix users who like to use terminals so that they can call themselves hackers. If you think my friends are horrible, you’re probably right, and I should keep my dev questions for Stack Overflow or Reddit from now on.

Anyway…amused and a little concerned, I kept on reading articles and watching videos on Vim regardless.

Without some background, it can be very daunting to navigate around Vim. Luckily there are a plethora of articles, videos and books on the topic. Practical Vim is a very good book; I used it (and am still reading it) to learn about Vim, and I recommend it as it contains a lot of examples and is easy enough to read. A fun way to learn Vim, if you are into action-adventure games, is to play an online game based on Vim’s keyboard shortcuts called Vim Adventures. It’s the “Zelda meets text editing” game; I’ve been playing it and it’s fun and polished (but it’s not free).

At first I was actually very slow at doing anything, but I got better after a couple of weeks. Editing and navigating code was a lot quicker than I imagined, and any change I needed to make had become much simpler. The more I practiced and learned about Vim, the faster I became at finding my way around. Vim’s ability to navigate and modify text is unrivalled.

I realised I needed to reach for the mouse less often and noticed I was spending a lot more time with my hands on the keyboard. My wrist has been thanking me ever since.

Mistake #2 – Vim doesn’t let you code at the speed of thought

To my regret, after two months of using Vim, I realised I wasn’t any faster at actually writing code. I was definitely faster at editing and refactoring, but in the end it was very dumb of me to think I could improve as a coder just by learning Vim commands.

I’ve actually come to the conclusion that if one is able to code at the speed of thought with Vim, they can probably do so without it too. However, if you’re that kind of dev you’re probably a dev unicorn; please do get in touch, I want to capture and study you.

My initial motivations were wrong but I’ve learned something new and I don’t regret it 🙂 Learning Vim is like learning a programming language in itself and it’s extremely rewarding.

It was incredible to realise how little I remembered about the position of the keys on the keyboard. Despite being able to type without looking, I was still not able to reach for a single key directly. For example, using simple Vim commands like h j k l (left, down, up and right) to move around in normal mode forced me to look at the keyboard at least once to position my fingers.

Mistake #3 – I thought I was a touch typist but I clearly wasn’t

Even if it sounds dumb, I had always ignored why the letters F and J have small bumps. If you, like me, didn’t know why: their purpose is to help you find them with your index fingers so that you can position your other fingers on the so-called “home row” without looking. From there you can then reach for the other keys.

Home Row in a Qwerty layout

Using Vim made me a better typist (not necessarily a faster one) and brought my flaws with the keyboard to the surface. I’ve developed a better relationship with it, and I now believe a keyboard is for a developer what a lightsaber is for a Jedi.

Finally, I’ve been able to answer the question: “Is Vim still a relevant tool to learn in 2020?” The answer is: unless you’re a dev unicorn, a fast touch typist or someone who can already edit, write and code at the speed of thought, then yes, learning Vim is valuable and something to consider.

Vim is a time sink though: you’ll spend countless hours on it. Vim is very much like the English language: “easy” to learn but hard to master. I would recommend using a Vim plugin inside your preferred IDE rather than the original text editor. Don’t get me wrong, Vim has thousands of plugins that can bring it up to par with a modern IDE; however, I find it better to use an IDE with Vim extensions, though I guess it’s very much a matter of personal preference.

Mistake #4 – Why didn’t I learn Vim earlier?

I hope you enjoyed the article and let me know what you think in the comments!

A Brief Introduction to Docker

Only a few days ago I was looking into different solutions to migrate this coding blog to a different hosting provider. There are some changes I would like to make, but instead of messing around with the live version I would very much prefer to have a local WordPress instance to play with.

The last time I set up a local environment for a CMS like Joomla or Drupal was more than a decade ago. From what I recall it was just painful and woefully boring. I also hated the mess of dependencies and services that had to be installed and set up specifically for the target OS. On top of that, if the dev machine ran a different OS than the production machine, which was common, configurations might not match, leading to a series of painful yet necessary debugging steps to bring the application up and running. In fact, a common trick was to create a VM matching the OS used on the production server and then install the necessary programs.

Read More

Typemock Isolator++: A Run-time Code Modification Method to Unit Testing

Typemock Isolator++ is a library to incorporate a run-time code modification approach for unit testing in C++. Run-time code modification is a method to patch the application without terminating it. Typemock Isolator++ uses trap instructions to intercept and redirect function calls at run-time and it can fake almost anything: global, static methods and members, free functions, non-virtual methods and live instances. It works seamlessly with GoogleTest, Boost.Test, UnitTest++, CppUnit and Microsoft Test and it is compatible with both Windows and Linux.

Read More

Typemock Isolator: A CLR Interception Approach to Unit Testing

Recently I contacted Typemock and was offered a license for Typemock Isolator, their flagship tool, in support of my OSS project LINQBridgeVs. Typemock Isolator is a viable tool to incorporate CLR interception for unit testing in .NET.

CLR is the acronym for Common Language Runtime, the virtual machine that manages the execution of .NET programs. Typemock Isolator intercepts CLR calls at run-time and offers a fluent API to fake almost anything: static and sealed classes, instances, virtual or non-virtual synchronous and asynchronous methods, P/Invoke calls, private and static constructors and even LINQ queries. Typemock Isolator comes as a Visual Studio extension compatible with Visual Studio 2010 through 2017, with any .NET Framework version up to 4.7.2, and incorporates four different components: 

Read More

NDepend: A Static Analyser for .NET and .NET Core

NDepend is a static analyser for .NET and .NET Core. Recently I was contacted by its creator, Patrick Smacchia, who kindly offered me a license in support of my OSS project LINQBridgeVs.

Overview

NDepend is a tool mainly targeted at software architects who want to have a deep insight into their projects. NDepend gathers data from a code base and includes code quality metrics, test coverage statistics, assembly dependencies, evolution and changes, state mutability, usage of tier code, tech debt estimation and more. Another interesting feature is the ability to write custom rules using a domain-specific language called CQLinq, which is based on LINQ, C# and the NDepend API.

Read More

Unity (Pre 5.5) Memory Safe Enumerators with C5 Generic Collection Library

DISCLAIMER: The topic treated in this article is only valid for versions of Unity up to 5.4.

A long time ago I posted an article on how disposable value types were treated in Unity and why they used to generate unnecessary and unwanted garbage. It emerged that in the official Mono compiler, as well as in the Microsoft C# compiler (but not in Unity), a violation of the C# specification led to an optimisation of disposable structs within a using statement. Disposable value types are used in C# mainly to implement iterator blocks, which are used to iterate over collections. Two years ago I decided to fix this issue by re-implementing the enumerators in a library called C5, a project providing generic collection classes for C# and other CLI languages. However, with the release of Unity 5.5 back in March 2017, version 4.4 of the Mono C# compiler was shipped; this issue was finally fixed properly and became history.

Read More

A Static Code Analysis in C++ for Bullet Physics

Introduction

Hello folks! I’m here again, this time to talk about static analysis. If you are a developer with little to no knowledge on the subject, this is the right article for you. Static analysis is the process of analysing the code of a program without actually running it, as opposed to dynamic analysis, where code is analysed at run time. This process helps developers identify potential design issues and bugs, improve performance and ensure conformance to coding guidelines. Running static analysis regularly is a good practice, as it helps identify issues at an early phase of the software development life cycle. It is mainly used on strongly typed, static languages like C/C++, C# or Java, and there are plenty of products out there, both free and commercial. Have a look at this Wikipedia article.

Read More

Unity Mono Runtime – The Truth about Disposable Value Types

When I started making games with Unity, after almost 10 years of C# development, I was quite concerned to learn that foreach loops are widely avoided in Unity because they allocate unnecessary memory on the heap. Personally I love the clean syntax of a foreach: it aids readability and clarity, and it also raises the abstraction level. A very clear and neat explanation of the memory problem can be found in a blog article posted on Gamasutra by Wendelin Reich.

From Wendelin’s analysis it emerged that the version of the Mono compiler adopted in Unity behaves differently from Microsoft’s implementation. In particular, enumerators, which are usually implemented in the .NET Framework as mutable value types, are boxed by the compiler, causing unnecessary generation of garbage. Boxing is the process of converting a value type (allocated on the stack) into a reference type, thus allocating a new instance on the heap.

Read More