An NLogN 2D Polygon Splitting Algorithm

I needed to come up with a solution to help designers author 2D polygons over flat surfaces directly in Unreal Engine 4. I was also asked for a quick way to split an authored polygon into multiple ones.

In a nutshell, designers wanted to define a 2D polygon, like the one in figure 1 but also have the ability to define internal segments:

Figure 1 – An example of a (split) polygon

I solved the authoring part by extending the UE4 spline component. However, I needed to keep the splitting part simple for both me to implement and for designers to use. I didn’t have much time and I had to find a quick solution that gave the best bang for the buck. I came up with two options:

  • Use one or more splines to partition the polygon.
  • Add another type of vertex to the spline component and use it as an “internal vertex”.
Figure 2 – Option one using splines on the left, option two using internal vertices on the right

The first option seemed appealing as it gave designers more authoring freedom. However, it also meant more work for them, creating (and authoring) more than one spline, and on my end I had to write quite a lot of boilerplate code to support it.

The second option seemed like a hack, but adding a second type of vertex to the spline component (actually to its metadata) was easy to do. And for designers it was definitely the easier option as well.

Maths is fun (really!!!)

After deciding to go with option #2, the only “bit” left was to write a 2D polygon splitting algorithm that, given a convex or concave polygon and a list of non-intersecting internal segments, spat out a list of sub-polygon indices.

Let’s take a look at a concrete example in figure 3 and (for fun) define the problem mathematically: $I$ represents the indices of our polygon and $S$ the pairs of indices that split the polygon.

Figure 3 – A poly to split

$ I = \{ 0, 1, \dots, n-1 \} \subset \mathbb{N} $
$ S = \{ (a, b) : a, b \in I \} $

so we have in figure 3:

$ I = \{ 0, 1, 2, 3, 4, 5, 6, 7 \} $
$ S = \{ (1,7), (1,6), (1,3), (1,4), (6,4) \} $

The expected outcome would be something similar to this:

$ T = \{ (1,7,0), (6,7,1), (6,1,4), (1,2,3), (1,3,4), (4,5,6) \} $

I have to admit that the first algorithm I came up with was just plain inefficient. It only needed to run in the Unreal Engine editor, and surely designers weren’t going to create polys with hundreds of thousands of vertices, so it turned out to be fine in the end.

I couldn’t live with it

Although the algorithm worked, I kept thinking about how inefficient my implementation was. Just to give you a hint, the time complexity was something like $ \theta (n^3)$, let alone the space complexity, which I’d like to omit if you don’t mind. Let’s pretend the following solution was the first I came up with.

After spending some time thinking it through, I came up with a better solution that runs in $ \theta (N \log N)$ time. It’s not super fast… but surely better than cubic time complexity!

Another one bites the maths

First of all we need to order our segment list $S$ ascending by the absolute index difference of each pair. That is, by:

$ \lvert a_i - b_i \rvert $

Our set $S$ then becomes:

$ S = \{ (1, 3), (6, 4), (1,4), (1, 6), (1,7) \} $

Sorting an array isn’t too expensive (depending on the sorting algorithm, of course). Alternatively, a priority queue could be used instead, so elements are ordered upon insertion. Taking elements off a priority queue is not constant time, but it’s relatively cheap: $ \theta(\log n )$.

Now for each index pair in the sorted list $S$, let’s add the indices in between from $I$. Formally:

$ \forall (a_i, b_i) \in S : P_i = \{ a_i \} \cup \{ k \in I : a_i < k < b_i \} \cup \{ b_i \} $

$ I = I \setminus \{ k \in I : a_i < k < b_i \} $

If for any reason $ a_i, b_i $ are not in $I$ then we have intersecting internal segments. Our final list $T$ is going to be the union of P and what’s left in I.

$ T = P \cup I $

The first segment in $S$ is $(1, 3)$. We need to take all the indices still in $I$ between 1 and 3. In this case it’s going to be index 2 only, and our first poly is $ (1, 2, 3) $.

$ P_1 = \{ 1, 2, 3 \} $

Now we need to remove index 2 from $I$.
$ I = \{ 0, 1, \cancel{2}, 3, 4, 5, 6, 7 \} $

The second segment is $(6, 4)$, the index in between is 5. Let’s apply the same logic:

$ P_2 = \{ 6, 5, 4 \} $

Now remove 5 from $I$:
$ I = \{ 0, 1, \cancel{2}, 3, 4, \cancel{5}, 6, 7 \} $

The third segment is $(1,4)$. Now, the indices in between are 2 and 3, but 2 is no longer in $I$ so we can use index 3 only. Which makes our third poly:

$ P_3 = \{ 1, 3, 4 \} $

And let’s remove 3 from $I$:

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, 4, \cancel{5}, 6, 7 \} $

The fourth segment is $(1, 6)$. The indices in between are 2, 3, 4, 5, but only 4 is left within this range in $I$:

$ P_4 = \{ 1, 4, 6 \} $

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, \cancel{4}, \cancel{5}, 6, 7 \} $

And finally segment $(1, 7)$. The indices in between are 2, 3, 4, 5, 6, but only 6 is left within this range in $I$. Thus:

$ P_5 = \{ 1, 6, 7 \} $

$ I = \{ 0, 1, \cancel{2}, \cancel{3}, \cancel{4}, \cancel{5}, \cancel{6}, 7 \} $

So we only actually have $ (0, 1, 7) $ left in $I$.

Now the last bit: we need to merge $P=\{ P_1, P_2, P_3, P_4, P_5 \} $ with what’s left in $I = \{ (0,1,7) \} $:

$ P = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7) \} $

$ T = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7) \} \cup \{ (0,1,7) \} $

$ T = \{ (1,2,3), (4,5,6), (1,3,4), (1,4,6), (1,6,7), (0,1,7) \} $

These are the same six triangles we expected earlier, up to rotation of the indices, and as expected there are $ |S| + 1 $ of them.

Show me the code!

Ok ok… if you’ve had enough of this math nonsense, so have I. Let’s put together a simple C++ snippet that does the job. Let’s first define the struct Segment, which contains the pair of indices that split the polygon and the greater-than and less-than comparison operators.

struct Segment
{
	int32_t indexFrom;
	int32_t indexTo;

	bool operator>(const Segment& other) const
	{
		return std::abs(indexFrom - indexTo) > std::abs(other.indexFrom - other.indexTo);
	}

	bool operator<(const Segment& other) const
	{
		return std::abs(indexFrom - indexTo) < std::abs(other.indexFrom - other.indexTo);
	}
};

Now the struct Poly, which contains a list of indices:

struct Poly
{
	std::vector<int32_t> indices;
};

and now the function that does the actual work. I decided to use an inverted priority queue, that is, a heap where the root is the smallest element.

bool find_polys(std::set<uint32_t>& indices, std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>>& segments, std::vector<Poly>& polys)
{
	// we know there are going to be S + 1 polys
	polys.reserve(segments.size() + 1);

	while (!segments.empty())
	{
		const Segment segment = segments.top();
		segments.pop();
		const int32_t indexFrom = std::min(segment.indexFrom, segment.indexTo);
		const int32_t indexTo = std::max(segment.indexFrom, segment.indexTo);

		// if segment indices from and to have already been removed then we have intersecting segments!
		auto from = indices.find(indexFrom);
		if (from == indices.end() || indices.find(indexTo) == indices.end())
			return false;

		// let's add the indices between from and to, erasing the inner ones from I
		polys.emplace_back();
		auto& polyIndices = polys.back().indices;
		polyIndices.push_back(*from);
		++from; // indexFrom stays in I, later segments may still use it
		while (from != indices.end() && *from < static_cast<uint32_t>(indexTo))
		{
			polyIndices.push_back(*from);
			from = indices.erase(from);
		}
		polyIndices.push_back(indexTo);
	}

	// anything left in indices should be copied over to the last poly in the list
	polys.emplace_back();
	std::copy(indices.begin(), indices.end(), std::back_inserter(polys.back().indices));

	return true;
}

Asymptotic Complexity

It seemed to me the time complexity is $ \theta (N \log N)$, where the linear part is iterating over the segments and the logarithmic bit covers both adding the indices in between and popping the segment off the queue. I didn’t trust myself though, so I installed Google Benchmark and wrote some code to stress test find_polys. I was not very familiar with it before, but it’s a well-established library from Google to benchmark code snippets, and it also estimates asymptotic complexity! Exactly what I needed 🙂

For the benchmark I tested the function with an increasing number of indices, from 10 to 100,000. For the test I automatically generated the segments between the indices so that the polygon would be fan triangulated.

This is the benchmark code:

static void BM_find_poly(benchmark::State& state)
{
	int numVertices = state.range(0);
	int numSegments = 0;
	for (auto _ : state)
	{
		std::vector<Poly> polys;
		// let's build the input
		std::set<uint32_t> indices;
		std::priority_queue<Segment, std::vector<Segment>, std::greater<Segment>> segments;
		generateSegments(numVertices, segments, indices);
		numSegments = static_cast<int>(segments.size());
		find_polys(indices, segments, polys);
	}
	// set the number of elements
	state.SetComplexityN(numSegments + numVertices);
}

BENCHMARK(BM_find_poly)->Unit(benchmark::kMicrosecond)->RangeMultiplier(10)->Range(10, 1E5)->Complexity();

Figure 5 – Yes I called my PC Thanos…

And boom, I was right! Yay! My implementation is indeed $ \theta (N \log N)$! I didn’t trust Google Benchmark (though I should have), so I plotted a graph in Excel using the output data:

Time complexity is in fact NLog(N)!


While it’s not an algorithm I’d personally employ at runtime in a game, it isn’t too slow to be used in an editor (32 ms to process 100,000 vertices and 99,997 segments), considering I tested it on an old CPU from 2013.

I hope you have enjoyed the article, and you liked my 2D polygon splitting approach. If you manage to come up with a more efficient solution get in touch and let me know!
