Typemock Isolator++: A Run-time Code Modification Method to Unit Testing

Typemock Isolator++ is a library that brings a run-time code modification approach to unit testing in C++. Run-time code modification is a method of patching an application without terminating it. Typemock Isolator++ uses trap instructions to intercept and redirect function calls at run-time, and it can fake almost anything: global and static methods and members, free functions, non-virtual methods and live instances. It works seamlessly with GoogleTest, Boost.Test, UnitTest++, CppUnit and Microsoft Test, and it is compatible with both Windows and Linux.

Licensing: Free For Life

Since September 2019 Typemock Isolator++ Professional has been free; as stated on the product page, it’s Free For Life. The company also offers paid support at €1548 per licence per year.

Installation

Isolator++ is compatible with both Windows and Linux. When downloading, you can choose the package for your platform:

  • Windows: a classic Windows installer with minimal user input needed.
  • Red Hat Package Manager (RPM) package.
  • Debian package (DEB), created from the RPM using Alien. On some Linux distributions, like Ubuntu, it’s possible to install it through the GUI if you don’t want to use the terminal.

The evaluation licence is emailed straight after the download, and once you’ve installed Isolator++ you need to activate it. The installation process is well explained in the Getting Started section. Bear in mind that the evaluation licence sent with the activation email is only valid for Windows; you will need to contact Typemock to receive a valid one for Linux as well.

Integration

Isolator++ works seamlessly with many existing unit testing frameworks:
GoogleTest, Boost.Test, UnitTest++, CppUnit and Microsoft Test. It also integrates with many code coverage tools for both Windows and Linux.

From Visual Studio 2017 15.6, GoogleTest and Boost.Test are both supported natively by default. Unit tests are automatically discovered after a successful build and appear in the Test Explorer for execution or debugging. They can also be run directly from the code window by clicking the icon above the test name.

Running the Examples

A good array of examples comes with the installation, and they are a good place to start. On Windows there are Visual Studio solutions, while on Linux there are Makefiles for both GCC and Clang.

I decided to compile and run the examples on Ubuntu 18.10 (Cosmic). Although they ship with an older version of the GoogleTest framework, I decided to use the latest one. Here are the bash commands you need to get and build the latest GoogleTest sources:

sudo apt-get install libgtest-dev
sudo apt-get install cmake # in case you don't have it 

cd /usr/src/gtest 
sudo cmake CMakeLists.txt 

sudo make
#let's copy the libs to the default path
sudo cp /usr/src/gtest/*.a /usr/lib

I had some initial issues using the provided Makefile for GCC. The linker failed because Isolator++ wasn’t compiled as position-independent code. From GCC 6.0 onward, position-independent executables are enabled by default to support address space layout randomization.
I contacted Typemock support and they quickly replied explaining that I had to disable PIE in the Makefile for both compilation and linking:

-no-pie was added to the compiler and linker flags to build the Isolator++ examples.

Unexpectedly I ran into another issue where the linker wasn’t able to bind some references to the GoogleTest library. It turned out to be a dual-ABI mismatch between the IsolatorCore library, which is probably compiled with a pre-GCC 5.1 compiler for backward compatibility, and GoogleTest. Setting the following define to 1 (or deleting it) in stdafx.h made everything link just fine:

#define _GLIBCXX_USE_CXX11_ABI 1

Before running the tests successfully I needed to set two environment variables: LD_LIBRARY_PATH and LD_BIND_NOW.

# this is used to locate isolator++
export LD_LIBRARY_PATH="/usr/lib64/typemock"

# tells the dynamic linker to resolve all symbols at startup
export LD_BIND_NOW=1

The library path must be added to the environment variable LD_LIBRARY_PATH, while LD_BIND_NOW must be set to 1 in order to mock global and static functions. There is a brief explanation in the Red Hat documentation, but in short LD_BIND_NOW instructs the dynamic linker to resolve all symbols at application startup instead of deferring resolution to the first call.

Finally, the unit tests ran fine.

The cost of writing tests

The creation of unit and integration tests is frequently perceived as a cost rather than a long term investment. The risk of changing or refactoring untested code increases dramatically as the project grows. Such risk is like a debt, it increases over time and it is more expensive to pay back in the long term than investing in writing tests at the very early stages of development.

In new projects teams can draw techniques from test-driven development to make the creation of tests easier, if enough time is allocated by project managers to hit the desired test coverage. However nowadays it is more common to work on existing projects than brand new ones. C++ is an old language: it has been around for almost 40 years, since its creation in 1979. In fact it is common to find systems built on C++ that span two or even three decades, especially in the banking, space and defence industries, and also in game engines. Writing tests in isolation for such systems is likely to be very difficult if not impossible without refactoring existing code.

Mock Frameworks and the Proxy Pattern

Mock frameworks are used to automate the creation of mock classes, which are complicated and time-consuming to put together. Mocks are used to imitate behaviours of real objects or to ‘fake’ dependencies of a class. Mock objects have the same interface as the real objects they mimic, allowing a client object to remain unaware of whether it’s using a real object, or a mock object.

In my previous article, in the section ‘Testing using the Proxy pattern’, I discussed how mocking frameworks work under the hood and why they are limited in what they can do. Although the article covered the topic for C# only, mocking frameworks in C++ also rely on polymorphism to hook calls to real objects.

In C++ proxy-based frameworks don’t support mocking of static, free (old C-style functions), global and non-virtual functions. Luckily the flexibility of C++ makes it possible to work around some of these limitations: for example using composition with templates instead of composition with interfaces.

There are a lot of open source mocking frameworks available for C++: GoogleMock, CppUTest, Boost.Turtle and FakeIt are very popular among developers. MockItNow, although no longer maintained, is the only open source alternative that uses binary patching for mocking.

Runtime Binary Patching

Isolator++ doesn’t rely on inheritance to intercept calls. It uses a very different technique that involves patching the test executable at run-time. Run-time code modification has been the subject of research into methods for fixing bugs and security issues without restarting a program: mission-critical systems, such as telecom or banking systems, cannot be stopped if the required level of availability is to be maintained. Run-time code modification is also referred to as run-time binary patching.

Function interception is achieved by manipulating the prologue of the target function and re-routing the call with an unconditional jump to the user-provided function. The prologue saves the previous stack frame pointer, held in the rbp or ebp register (depending on whether the executable is compiled for x64 or x86), on the stack and sets up the current stack frame. The original instructions from the target function are then placed in a trampoline, and the address of the trampoline is stored in a target pointer.

Let’s have a look at this very simple example:

struct Foo {
  int m_anInt;
  Foo() { m_anInt = 0; }
  
  void DoSomething() { m_anInt = 10; }
};

TEST(FooTest, FakeMethod) {
  //Arrange
  Foo* foo = new Foo();
  WHEN_CALLED(foo->DoSomething()).Ignore();
   
  //Act
  foo->DoSomething(); //faked: the real body would set m_anInt to 10
   
  //Assert
  EXPECT_EQ(foo->m_anInt, 0);
  delete foo;
}

The above snippet shows how to use Isolator++ in order to set the expectations in a unit test. There is no need to use a mock for the struct Foo. Binary interception works regardless of the type of object or method being intercepted. Thus Isolator++ can set the expectations on live instances.

Let’s have a look at the assembly code generated for a function before and after the expectations are set by Isolator++. When the test FakeMethod starts, the prologue of the target function DoSomething is untouched. The prologue is changed and the trampoline instruction is injected when the WHEN_CALLED macro is invoked.

The trampoline instruction is a call to a naked function that unconditionally jumps on return to a predetermined location or interceptor function. In this case the trampoline function simply returns to the caller of the function DoSomething() because the expectation is to ignore it. 

Runtime Binary Patching in action

Binary patching is very powerful when used in testing. It reduces the amount of refactoring before it’s possible to start writing tests in isolation because it relaxes any constraint imposed by proxy-based mocking frameworks.

Proxy Pattern vs Binary Interception

In this section I am going to compare the two aforementioned approaches to show how unit tests are written by using both Isolator++ and GoogleMock.

Let’s create a simple class called DbAuth that authenticates users. The class has two member pointers: one for database access, called DbUser, and another one for logging, called Logger.

struct DbConnection {
 void* m_socket;
 bool OpenConnection();
 void CloseConnection();
};

struct Logger final {
 Logger(const char* name) { /* open the file stream*/ }
 ~Logger() { /* close the file stream*/ }
 void Info(const char* as, ...) { printf("Logging Info"); };
 void Error(const char* as, ...) { printf("Logging Error"); }
 void Warning(const char* as, ...) { printf("Logging Warning"); }
};

//class to access the User table in the db
class DbUser final {
 DbConnection* m_connection;
public:
 DbUser() { 
  m_connection = new DbConnection(); 
  m_connection->OpenConnection();
 }
 ~DbUser() { 
   m_connection->CloseConnection(); 
   delete m_connection; 
 }
 bool IsValidUser(const char*, const char*) const;
};

//used to store any error 
struct Error {
 const char* m_message;
 int m_code;
};

//concrete class to test
class DbAuth final {
 int m_authCode;
 Error m_error;
 DbUser* m_database;
 Logger* m_logger; // a logging class
public:

 DbAuth() : m_authCode(0) { 
   m_database = new DbUser(); //open a connection to the db
   m_logger = new Logger("DbAuth"); //open a file stream
 }
 ~DbAuth() { 
   delete m_database; 
   delete m_logger;
 }
 const Error& GetError() const { return m_error; }
 
 bool AuthenticateUser(const char* username, const char* pwd) {
   m_logger->Info("Authenticating user %s", username);
   if (m_database->IsValidUser(username, pwd)) 
   {  
     m_logger->Info("user %s successfully authenticated", username);
     m_authCode = 10; 
     return true; 
   }
   m_logger->Error("Failed to authenticate the user %s", username);
   m_error.m_message= "Failed to authenticate the user";
   m_error.m_code = -2;
   return false;
 }
};

The class DbAuth has a function called AuthenticateUser to authenticate users given a username and password. In its constructor it creates an instance of DbUser and one of Logger, which respectively open a connection to a database and a file stream. If we wanted to write unit tests for the class DbAuth with a proxy-based framework, it would require some refactoring to avoid hitting the database and the file system. Let’s see how it is possible to test the DbAuth class without changing its design:

class TestAuthClass : public ::testing::Test {
public:
  void TearDown() override {
   ISOLATOR_CLEANUP();
  }
};

TEST_F(TestAuthClass, AuthenticateValidUser) { 
  DbUser* dbUser = FAKE_ALL<DbUser>();
  FAKE_ALL<Logger>(); 
  
  DbAuth* authObject = new DbAuth();

  auto isValidUser = IS<const char*>([](const char* s) -> bool 
    { return strcmp(s, "typemock") == 0 || strcmp(s, "isolator++") == 0; });
  
  WHEN_CALLED(dbUser->IsValidUser(isValidUser, isValidUser))
  	.Return(true);

  bool validUserSuccess 
       = authObject->AuthenticateUser("typemock", "isolator++");
  
  EXPECT_TRUE(validUserSuccess);
  delete authObject;
}

TEST_F(TestAuthClass, AuthenticateInvalidUser) {
  DbUser* dbUser = FAKE_ALL<DbUser>();
  FAKE_ALL<Logger>(); 

  DbAuth* authObject = new DbAuth();
  auto isInvalidUser = IS<const char*>([](const char* s) -> bool 
     { return strcmp(s, "An") == 0 || strcmp(s, "Intruder") == 0; });
  
  WHEN_CALLED(dbUser->IsValidUser(isInvalidUser, isInvalidUser))
  	.Return(false);
  
  bool invalidUserFail = authObject->AuthenticateUser("An", "Intruder");
  
  EXPECT_FALSE(invalidUserFail);
  
  EXPECT_EQ(-2, authObject->GetError().m_code);
  EXPECT_STREQ("Failed to authenticate the user",
     authObject->GetError().m_message);
  delete authObject;
}

In the Isolator++ API, the FAKE_ALL macro fakes current and future instances of a class. An enum that specifies the faking strategy can also be passed in: either Call Original or Recursive Fake.
Recursive Fake is the default option used by Isolator++ and it’s one of its special features: it recursively fakes every dependency of a class with a fake instance. This option also initialises every primitive, which is handy to avoid long setup, exceptions or undefined behaviour during testing.

At the beginning of each unit test, the functions FAKE_ALL<Logger> and FAKE_ALL<DbUser> tell Isolator++ to inject/return a mock instead of calling their original constructors.

DbAuth() : m_authCode(0) { 
   m_database = new DbUser(); // Faked instance injected
   m_logger = new Logger("DbAuth"); // Faked instance injected
}

In the two unit tests above, I tested the DbAuth class without hitting the database or the file system, and without changing its design.

However, one could argue that the design of DbAuth is not ideal. In fact dependencies to other classes should be abstracted away and supplied through the constructor. Such a design approach is called the Bridge pattern.

Following the Bridge pattern DbUser and Logger should be passed in through the constructor of DbAuth. If I were to write the same unit tests using a proxy-pattern framework, I would also need to make DbUser and Logger non-final and change every method I want to set my expectations on to be virtual:

struct Logger {
  Logger() { /* does nothing, testing only */ }
  Logger(const char* name) { /* open a file stream */ }
  virtual ~Logger() { }
  virtual void Info(const char* as, ...) { printf("Logging Info"); }
  virtual void Error(const char* as, ...) { printf("Logging Error"); }
  virtual void Warning(const char* as, ...) { printf("Logging Warning"); }
};

//class to access the db
class DbUser {
 DbConnection* m_connection = nullptr;
public:
 DbUser() { /* does nothing, testing only */ }
 DbUser(DbConnection* connection) :
  m_connection(connection) { 
    m_connection->OpenConnection();
 }
 virtual ~DbUser() { 
   if (m_connection) m_connection->CloseConnection();
 }
 virtual bool IsValidUser(const char*, const char*) const;
};

class DbAuth { 
  int m_authCode;
  Error m_error;
  DbUser* m_database;
  Logger* m_logger; // a logging class
public:
  DbAuth(DbUser* database, Logger* logger) 
   : m_authCode(0)
   , m_database(database)
   , m_logger(logger)
   { }
};

Now that DbUser and Logger can be inherited, let’s re-write the previous tests by using GoogleMock this time:

class MockDbUser : public DbUser {
public:
  MOCK_CONST_METHOD2(IsValidUser, bool(const char* username, const char* pwd));
};

class MockLogger : public Logger {
public:
  MOCK_METHOD2(Info, void(const char* msg, const char* param));
  MOCK_METHOD2(Error, void(const char* msg, const char* param));
  MOCK_METHOD2(Warning, void(const char* msg, const char* param));
};

TEST(TestAuthClass, AuthenticateValidUser) {
  MockDbUser mockDbUser;
  MockLogger logger;
  DbAuth dbAuth(&mockDbUser, &logger);
  
  EXPECT_CALL(mockDbUser, IsValidUser(StrEq("typemock"), StrEq("isolator++")))
     .WillRepeatedly(Return(true));
  
  bool userIsValid = dbAuth.AuthenticateUser("typemock", "isolator++");
  EXPECT_TRUE(userIsValid);
}

TEST(TestAuthClass, AuthenticateInvalidUser) { 
  MockDbUser mockDbUser;
  MockLogger logger;
  DbAuth dbAuth(&mockDbUser, &logger);
  
  EXPECT_CALL(mockDbUser, IsValidUser(StrEq("An"), StrEq("Intruder")))
  .WillRepeatedly(Return(false)); 
  
  bool userIsInvalid = dbAuth.AuthenticateUser("An", "Intruder");
  
  EXPECT_FALSE(userIsInvalid);
  EXPECT_EQ(-2, dbAuth.GetError().m_code);
  EXPECT_STREQ("Failed to authenticate the user",
    dbAuth.GetError().m_message);
}

In GoogleMock there’s no automatic mock generation and it requires a bit of setup. The mock class must inherit from the real class, and each virtual method has to be wrapped with a specific macro,
MOCK_METHODn() or MOCK_CONST_METHODn(), where n is the number of arguments of the function being mocked. There’s also a command-line tool written in Python that generates the mock definition given a file name and the abstract class defined in it. However, as stated in the GoogleMock documentation, such a tool may not always work due to the complexity of C++.

Dynamic Dispatch

Dynamic dispatch is the process of selecting which implementation of a polymorphic function to call at run-time. This process has a cost: virtual calls usually cannot be inlined, because the compiler doesn’t know at compile time which implementation will be invoked. The cost may be negligible in most cases, but if there is a need to squeeze performance out of a CPU the cost of a v-table lookup can be a problem. In game or physics engines static polymorphism is sometimes preferred for those classes with functions called thousands of times per frame.

Let’s imagine that the class DbUser is performance critical. To avoid a v-table lookup when the IsValidUser function is called let’s declare the dependencies of DbAuth as templates, and let’s make DbUser and Logger final again:

struct Logger {
 ....
};
class DbUser final {
  ....
  bool IsValidUser(const char*, const char*) const;
};

template <class T = DbUser, class U = Logger>
class DbAuth {
  ...
  T* m_database;
  U* m_logger;
public:
  DbAuth(T* database, U* logger)
   : m_authCode(0)
   , m_database(database)
   , m_logger(logger)
  { }
  ....
};

DbUser and Logger are final and the IsValidUser function is non-virtual again. The dependencies of DbAuth have been templated away and are passed through its constructor. This is a template variant of the Bridge pattern, which Google describes as high-performance dependency injection. The two classes have a nicer and cleaner design while using static polymorphism. Let’s write the unit tests using both GoogleMock and Isolator++:

class MockDbUser {
public: 
   MOCK_CONST_METHOD2(IsValidUser,bool(const char* username, const char* pwd));
};

TEST_F(TestAuthClass, AuthenticateValidUserGmock) {
  MockDbUser mockDbUser;
  MockLogger mockLogger;
  
  EXPECT_CALL(mockDbUser, IsValidUser(StrEq("typemock"), StrEq("isolator++")))
  	.Times(1)
  	.WillRepeatedly(Return(true));
  
  DbAuth<MockDbUser, MockLogger> dbAuth(&mockDbUser, &mockLogger);
  bool userIsValid = dbAuth.AuthenticateUser("typemock","isolator++") ;
  
  EXPECT_TRUE(userIsValid);
}

TEST_F(TestAuthClass, AuthenticateValidUserIsolator) {
  DbUser* dbUser = FAKE<DbUser>();
  Logger* logger = FAKE<Logger>();
  
  DbAuth<DbUser, Logger> dbAuth(dbUser, logger);
  
  auto isValidUser = IS<const char*>([](const char* s) -> bool
  	{ return strcmp(s, "typemock") == 0 || strcmp(s, "isolator++") == 0; });
  
  WHEN_CALLED(dbUser->IsValidUser(isValidUser, isValidUser))
  	.Return(true);
  
  bool validUserSuccess = dbAuth.AuthenticateUser("typemock", "isolator++");
  
  int timesCalledTrue = TIMES_CALLED(dbUser->IsValidUser(isValidUser, isValidUser));
  
  EXPECT_TRUE(validUserSuccess);
  EXPECT_EQ(timesCalledTrue, 1);
}

TEST_F(TestAuthClass, AuthenticateInvalidUserGmock) { 
  MockDbUser mockDbUser;
  MockLogger mockLogger;
  DbAuth<MockDbUser, MockLogger> dbAuth(&mockDbUser, &mockLogger);
  
  EXPECT_CALL(mockDbUser, IsValidUser(StrEq("An"), StrEq("Intruder")))
  	.Times(1)
  	.WillRepeatedly(Return(false)); 
  
  bool userIsInvalid = dbAuth.AuthenticateUser("An", "Intruder");
  
  EXPECT_FALSE(userIsInvalid);
  EXPECT_EQ(-2, dbAuth.GetError().m_code);
  EXPECT_STREQ("Failed to authenticate the user", dbAuth.GetError().m_message);
}

TEST_F(TestAuthClass, AuthenticateInvalidUserIsolator) {
  DbUser* dbUser = FAKE<DbUser>();
  Logger* logger = FAKE<Logger>();
  
  DbAuth<DbUser, Logger> dbAuth(dbUser, logger);
  auto isInvalidUser = IS<const char*>([](const char* s) -> bool 
  	{ return strcmp(s, "An") == 0 || strcmp(s, "Intruder") == 0; });
  
  WHEN_CALLED(dbUser->IsValidUser(isInvalidUser, isInvalidUser))
  	.Return(false);
  
  bool invalidUserFail = dbAuth.AuthenticateUser("An", "Intruder");
  
  int tc = TIMES_CALLED(dbUser->IsValidUser(isInvalidUser, isInvalidUser));
  
  EXPECT_FALSE(invalidUserFail);
  EXPECT_EQ(tc, 1); 
  
  EXPECT_EQ(-2, dbAuth.GetError().m_code);
  EXPECT_STREQ("Failed to authenticate the user", 
  dbAuth.GetError().m_message);
}

The GoogleMock API allows mixing of setup and assertion. Although this reads cleanly from a coding style point of view, it violates the AAA (Arrange, Act, Assert) paradigm. For example:

// Arrange
EXPECT_CALL(mockDbUser, IsValidUser(StrEq("typemock"), StrEq("isolator++")))
  	.Times(1) // it asserts here if IsValidUser 
                  // is called more than once
  	.WillRepeatedly(Return(true));

The call to Times(1) will assert whenever IsValidUser is called more than the number of times expected during the Act section. In the Isolator++ API it’s not possible to mix assertions with expectations. By design the API strictly follows the AAA paradigm. For example in the following snippet the Arrange is separated from the Assert:

auto isInvalidUser = IS<const char*>([](const char* s) -> bool 
  	{ return strcmp(s, "An") == 0 || strcmp(s, "Intruder") == 0; });

// Arrange
WHEN_CALLED(dbUser->IsValidUser(isInvalidUser, isInvalidUser))
  	.Return(false);

// Act
 bool invalidUserFail = dbAuth.AuthenticateUser("An", "Intruder");

// Assert
int tc = TIMES_CALLED(dbUser->IsValidUser(isInvalidUser, isInvalidUser));
EXPECT_EQ(tc, 1);

For conditional behaviour faking Isolator++ has several argument matchers for generic comparison and also custom matchers for object comparison.

Conclusions

Despite C++ offering some niche workarounds through templates to overcome some of the limitations imposed by proxy-based mocking frameworks, it is still up to developers and software architects to come up with an ad hoc design to make testing in isolation possible. Otherwise the alternative is to go through a slow, painful and very risky refactoring process.

Binary patching is a powerful technique for testing because it sets no constraint on code design. It can be very useful on existing code-bases, especially those that are hard to change and to test, in order to achieve good test coverage before any refactoring is needed.

With great power comes great responsibility. In fact, binary patching is not meant to be a shortcut to write poorly designed software. Good design principles and techniques drawn from test-driven development should always be the preferred approach to include tests from the very early stages of development.

Isolator++ is a great tool to automate the creation of mocks and to start writing tests quickly before changing a line of code.

As for the cons, binary patching relies heavily on specific OS facilities, like Microsoft Detours on Windows or dlsym on Linux, and on CPU architectures. While Isolator++ works on both Windows and Linux, there is no support for embedded devices, macOS or console SDKs, so users cannot run their tests directly on those target platforms.

I hope you enjoyed reading the article, I’ll see you soon!

A Static Code Analysis in C++ for Bullet Physics

Introduction

Hello folks! I’m here again, this time to talk about static analysis. If you are a developer with little to no knowledge of the subject, this is the right article for you. Static analysis is the process of analysing the code of a program without actually running it, as opposed to dynamic analysis, where code is analysed at run-time. This process helps developers to identify potential design issues and bugs, to improve performance and to ensure conformance to coding guidelines. Continue reading “A Static Code Analysis in C++ for Bullet Physics”

Unity and Reflection – Optimising Memory using Caching on iOS

Summary

Reflection

I really love reflection. Reflection is a technique for obtaining type information at run-time. It’s not only that: with reflection it is possible to examine and change information of objects, and to generate (technically, to emit IL) new classes, methods and so on, still at run-time. It’s a powerful technique, but it is known for being slow under certain circumstances. If you are a game developer targeting mobile devices (iOS or Android, for instance) using Unity, you definitely want to preserve your memory and save precious clock cycles. Moreover, with AOT (Ahead of Time) compilation, IL cannot be emitted at run-time as it is pre-generated at compile time. Therefore a large part of reflection, e.g. expression trees, anonymous types etc., is just not available.

The Problem

Recently I have worked on a dynamic prefab serializer and I needed to use reflection to retrieve types from their string representations. In general to retrieve a type in C# you have three options:

  • typeof(MyClass), which is an operator to obtain a type known at compile-time.
  • GetType() is a method you call on individual objects, to get the execution-time type of the object.
  • Type.GetType("Namespace.MyClass, MyAssembly") gives you a type from its string representation at runtime.

Continue reading “Unity and Reflection – Optimising Memory using Caching on iOS”

Deploying Assimp Using Visual Studio and Android NDK for Tegra Devices

Hello folks, welcome back to my blog, hope you are ready for a new adventure. This time I promise it is going to be an adventure with a capital A. I’ve been working on a finite element method algorithm using C++ (and later CUDA) to prove that the latest generation of mobile devices (more specifically the Kepler architecture in the Shield Tablet) is capable of running such complex algorithms.

The Shield is shipped with Android KitKat 4.4, thus using C++ or Java and OpenGL ES 2.0 is not a problem…well not just yet 😀

Setting up the environment is not too difficult either. I used the Tegra Android Development Pack, which installs all the tools you need to start developing on Android (including extensions for Visual Studio and the whole Eclipse IDE). After a few clicks you have everything up and running.

Summary

The Problem

I need to load 3D models. Although I could have written my own parser (which might have been less painful), I decided to use Assimp instead. Assimp is a very handy library that can handle a plethora of different file formats. I’ve used it extensively in all my projects so far, and it supports Android and iOS (as stated on its GitHub page).

I read the docs a lot, but I found no easy way (well, at least under Windows) to generate a Visual Studio solution (sorry, I’m a Visual Studio addict) to compile it using the Android NDK. I searched the web for a long while and found a couple of articles that explain how to compile Assimp for Android (this: Assimp on Desktop and Mobile, and this other: Compile Assimp Open Source Library For Android). The procedure is quite troublesome: it requires Cygwin under Windows and a lot of patience. Luckily, in the second article mentioned above, the author posted a pre-compiled Assimp 3.0 lib with headers included.

Download Assimp 3.0 lib for Android here.

Having Assimp already compiled was truly helpful. It saved me a lot of time that I would have spent figuring out how to put everything together.

Here comes the tricky part. Assimp was compiled as a shared library (an .so). Referencing it is pretty easy: the include and lib paths have to be set and the name of the library specified. However, Visual Studio doesn’t use the Android.mk (whereas Eclipse does, I think) that tells the Ant build and the apk builder how to pack the apk and which local shared libs to include. This has to be done in the project’s properties instead.
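For comparison, the Eclipse/ndk-build route would declare the prebuilt library in Android.mk roughly like this (a sketch; the module name and path are assumptions, not from my project):

```makefile
# Android.mk fragment: declare the prebuilt libassimp.so so that
# ndk-build packs it into the apk (paths are illustrative)
include $(CLEAR_VARS)
LOCAL_MODULE    := assimp
LOCAL_SRC_FILES := prebuilt/$(TARGET_ARCH_ABI)/libassimp.so
include $(PREBUILT_SHARED_LIBRARY)
```

In Visual Studio there is no such file, which is why the equivalent setting hides in the project properties.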

After setting up the whole thing, the solution compiled, linked and the apk was created correctly. I was confident that Assimp would be deployed with the apk, but I soon found out it was not. Surprisingly, I got this error on the tablet when I ran the application:

Unfortunately, NativeActivity has stopped…

Looking at the LogCat I found this error message too:

Figure 1

“java.lang.IllegalArgumentException: Unable to load native library: /data/app-lib/com.shield.fem-1/libShieldFiniteElementMethod.so”, which told me absolutely nothing about the nature of the problem. Fortunately, the only thing I knew I had changed was the reference to Assimp, so it was clear what the cause of the problem was, even though the why and how weren’t explained at all by the log files. It was easy to spot, though: I looked at the output window and libassimp.so (see Figure 2 below) was not included at all.

Figure 2 – Output library list

The Solutions

I found two solutions for this issue. I like to call them respectively “The easy way” and “The way of pain”. I had already added an external library (I had to use libpng for loading textures), but in that case it went smoothly because it was a static library. Static libraries are .a files (.lib on Windows): all the code belonging to the library is in this file, and it is linked directly into the program at compile time. Shared libraries are .so files (.dll on Windows, .dylib on OS X): all the code belonging to the library is in this file, and it is referenced by programs using it at run-time, which is why it is not deployed with the apk unless explicitly requested.

Way of pain

DISCLAIMER: This solution involves rooting your device; I’m not responsible if your warranty is voided. Please do it at your own risk.

This was my first attempt to shove libassimp in. By default all the libraries stored in /system/lib on the device are loaded automatically at startup, so the process is seamless: if a lib is there, any running process can use it. I used the command adb shell (adb is installed as part of the development pack), which gave me access to the bash-like shell on the tablet. As I expected, Assimp was not in the system lib folder. My first idea was to upload the lib into /system/lib manually, so I ran:

 adb push libassimp.so /system/lib

Unless your Android device is rooted and the /system mounted as read-write this is the message you will get:

Failed to copy ‘libassimp.so’ to ‘/system/lib/libassimp.so’: Read-only file system

The only solution, as I said, is to root your device first. This can be quite painful and depends on your model. There are a few good guides around: use Google, grab a cup of coffee and have a lot of patience. To root mine (a Shield Tegra) I used this guide, plus the app adbd Insecure, available on Google Play, which lets you run adbd in root mode once your device has been rooted.

At this stage I assume your Android friend is rooted, so you can finally remount the system folder with read-write permissions. Use this command:

adb shell
root@shieldtablet:/ # mount -o rw,remount /system

Later, if you want, you can restore the original read-only permissions by executing:

adb shell
root@shieldtablet:/ # mount -o ro,remount /system

OK, at that stage I had permission to do whatever I wanted with /system, so I was finally able to upload Assimp. Executing the adb push command again showed no error this time:

Figure 3 – Upload has been successful!

At this stage I didn’t have to do anything else: once the application starts, it loads Assimp (and any other libs in /system/lib) automatically.

The Easy Way

I found out about this easier solution only after I had gone through hell with the first, painful approach (trust me, it took me a while to understand how to root the device and which commands to run). Here you don’t need to root your device at all, but you will have to change your code a little bit to load Assimp (or any shared library) dynamically. Let’s start!

First of all, I didn’t know it was possible to upload shared libraries through Visual Studio (d’oh!). I didn’t find it written anywhere (well, maybe I didn’t search well enough), but looking at my project's properties I found this:

Figure 4 – Project properties

In the Ant build it is possible to specify Native Library Dependencies! At this point I can imagine you laughing, knowing what I went through with the “way of pain” 😀

Anyway, I set the references to Assimp right here; look at Figure 5:

Figure 5 – Project properties: referencing Assimp

Using this approach the shared library is seamlessly built into the apk! The only drawback is that it won’t be loaded automatically, and for that another little trick is needed. If you try to execute/debug your program now, you will likely get the same error message as in Figure 1 again.

You need to load any shared library before your native activity starts. To do this, a small Java class is needed. Something like:


package com.your.package;

public class Loader extends android.app.NativeActivity {
   // The static initializer runs when the class is loaded,
   // i.e. before the native activity starts, so libassimp.so
   // is already in memory by the time the native code needs it.
   static
   {
     System.loadLibrary("assimp");
   }
}

It is important that Loader.java goes under the src folder of your project and that it is wrapped in a folder structure matching your package declaration (I know, if you’re a Java guy this is obvious to you, but I’m more of a C#/C++ one, so it took me a while to figure it out 😛).
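Concretely, for the com.your.package declaration used in the snippet above (a placeholder — substitute your own package name), the layout would be:

```
src/
  com/
    your/
      package/
        Loader.java
```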

The last bit: in your AndroidManifest.xml, android:hasCode must be set to true, and the android:name attribute in the activity tag must be changed from android.app.NativeActivity to Loader (i.e. the name of your Java class).
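Putting it together, the relevant part of the manifest would look something like this. This is a sketch: it assumes the com.your.package package from the Java snippet, and the lib_name value your_native_lib is a placeholder for the name of your own native library.

```xml
<application android:label="@string/app_name" android:hasCode="true">
  <!-- Use our Loader class instead of the built-in NativeActivity,
       so the shared libraries are loaded first -->
  <activity android:name="com.your.package.Loader"
            android:label="@string/app_name">
    <!-- Tells NativeActivity which .so holds the native code -->
    <meta-data android:name="android.app.lib_name"
               android:value="your_native_lib" />
    <intent-filter>
      <action android:name="android.intent.action.MAIN" />
      <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
  </activity>
</application>
```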



That’s finally it!

Conclusions

I’m a total newbie in Android development, and it was quite hard for me to figure out how to deploy a shared library from Visual Studio, as it wasn’t very intuitive. A lot of examples I found online use command-line scripts to compile and/or different IDEs. The most common approach is an .mk file where properties, libraries, etc. are defined. Mk files are (apparently) completely ignored by VS, so it wasn’t possible for me to use one.

I really hope this article can help you. I am looking forward to reading your comments, hoping that there are other simpler ways to achieve what I did today.

See you soon!

C++ Tail Recursion Using 64-bit variables – Part 2

In my previous post I talked about recursion problems in a Fibonacci function using 64-bit variables as function parameters, compiled with the Microsoft Visual C++ compiler. It turned out that while tail recursion was enabled by the compiler for 32-bit types, it was not when switching to 64-bit ones. Just as a reminder, tail recursion is an optimization performed by the compiler: the process of transforming certain types of tail calls into jumps instead of function calls. More about tail recursion here.

My conclusion was that tail recursion is not handled properly by the Visual C++ compiler and a possible explanation could be the presence of a bug.

The calculation of Fibonacci sequences of big integers is not an everyday task but it can still be a reliable example to show how tail calls are implemented.

Not happy with my conclusions, and following several suggestions from users’ comments (here on the blog, on Reddit and on StackOverflow), I wanted to understand more about this issue and to explore other solutions using different compilers.

Continue reading “C++ Tail Recursion Using 64-bit variables – Part 2”