Building Kytra an intelligent investment brokerage

Part 1 - An introduction

Author: Ben Toogood

02 December 2019

  • golang
  • kubernetes
  • micro
  • microservices
  • investing

At Kytra we’re on a mission to simplify investing. Earlier this month we launched our app on both iOS and Android. This is the story of how we built the backend platform, and how you can leverage the same technologies to build a similar platform yourself.

I will outline the requirements which influenced the design of the platform and alternative technologies to the ones we used. Later in the series, you will learn the steps required to deploy this architecture, and launch multiple services of your own.


When we began architecting our backend platform, the first decision we had to make was a critical one: do we build a microservices stack or a monolithic application?

A common mistake I’ve seen other companies make when implementing a microservice architecture is failing to properly understand the business domains it’s modelled around. This risk is multiplied for early-stage businesses, where products and technologies evolve rapidly.

Despite the risks, we decided to go ahead and build our backend as a collection of microservices. Whilst the benefits of microservices have been outlined many times before (see this post by Martin Fowler), two in particular influenced this decision:

Separation of concerns: We knew when starting out that investing platforms were complex and adding intelligence would compound the problem. Distributing the business logic across multiple applications makes it easier to isolate logic and reduce the chance of a change in one part of the platform impacting another unexpectedly.

Cost: As a bootstrapped business, our tech budget is extremely lean, so the running cost of the platform plays a big part in influencing our decisions. We’re currently running a total of 80 services across two environments at a cost of less than £250 per month. Additionally, we estimate that only 1–2% of the platform’s scale is currently being utilised (you can’t run less than one replica of each service). We’re confident we can scale our user base 10x with little impact on platform costs.

Service Definitions

Before deciding on any implementation details, we wanted to know which services would be required to power our V1 platform. Protocol buffers provide an excellent way of defining APIs and allow you to refine your API definitions without writing any code.

“Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.”

Below is the protocol buffer for our Users Service. I’ll dig deeper into this in a later post in the series.

syntax = "proto3";

service Users {
  rpc Create(User) returns (User) {}
  rpc Get(User) returns (User) {}
  rpc Update(User) returns (User) {}
  rpc List(ListRequest) returns (ListResponse) {}
  rpc Search(SearchRequest) returns (ListResponse) {}
}

message User {
  string uuid = 1;
  string first_name = 2;
  string last_name = 3;
  string email = 4;
  string profile_picture_id = 5;
  int64 created_at = 6;
  string username = 7;
  string phone_number = 8;
}

message ListRequest {
  repeated string uuids = 1;
  repeated string phone_numbers = 2;
}

message ListResponse {
  repeated User users = 1;
}

message SearchRequest {
  string query = 1;
  int32 limit = 2;
}
Although the above example is modelled quite closely around a single data model, it’s critical to design your services around business domains rather than models. When designing our services, we follow the Single Responsibility Principle, as described by Robert C. Martin:

The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change.

We advocate that microservices should rarely be changed but frequently extended. The frequency with which you make major changes to your services is a good indicator of how tightly bound they are to their context.

Interservice Communication

The next step we took in designing our platform was deciding how we’d communicate between our services. Whilst we would most likely use one language for the majority of our backend, we wanted Kytra to be language agnostic.

For communicating between our front-end (React Native) and backend, we decided to use HTTP / JSON because of its great support and ease of use. However, this isn’t very efficient for inter-service communication, because encoding / decoding JSON is slow and memory intensive. As such, we opted instead to use RPC for synchronous communication. This article does a great job of outlining the benefits.

As an intelligent investment platform, we’re processing a high volume of data at any given time. In order to keep API response times low (<100ms) we want to do as much processing as possible asynchronously. We opted to use a hosted RabbitMQ solution for our async messaging, and have had great results so far. I will outline an example of how we use async messaging later in the series.

Writing the services

The next decision we had to make was selecting the primary programming language for our services. I come from a Ruby background, and whilst it’s a fun language to work with, it didn’t satisfy the requirements we’d laid out for this build:

Statically Typed: Using a statically typed language is a must when building software which handles currency. Dynamic conversion between data types can result in inaccuracies and rounding errors.

Compiled: Using a compiled language makes code safer and results in many bugs being caught during compilation. Compiled applications are also fast and resource efficient, resulting in a cheaper platform to run.

Fast to build with: Speed is everything for an early stage startup, as summarised in Facebook’s first motto: “Move fast and break things”.

Having reviewed a few options, it was clear that Golang (Go) was the perfect fit. It’s a statically typed, compiled programming language developed by Google and maintained by a great open-source community. It’s fast to build with, and it has amazing support for both RPC and HTTP.
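The currency concern above is easy to demonstrate in Go: floating-point arithmetic drifts, which is why monetary amounts are better held as integer pence. A sketch (not our production money type):

```go
package main

import "fmt"

// AddPence sums monetary amounts held as integer pence: exact, no rounding.
func AddPence(amounts ...int64) int64 {
	var total int64
	for _, a := range amounts {
		total += a
	}
	return total
}

func main() {
	// float64 drifts: 0.1 + 0.2 is not exactly 0.3.
	a, b := 0.1, 0.2
	fmt.Println(a+b == 0.3) // false

	// Integer pence are exact: 10p + 20p is exactly 30p.
	fmt.Println(AddPence(10, 20) == 30) // true
}
```

Static typing then stops a float from silently flowing into a money field: the compiler rejects passing a float64 where an int64 amount is expected.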

Choosing the framework: Micro

Communicating effectively between microservices requires a significant amount of network logic. We searched for an existing library to solve this problem, as we wanted to focus on the business logic and not networking. I looked at Go kit, Monzo’s Typhon and finally Micro. These frameworks each take a different approach to the problem, but I found that Micro solved ours best; I’ll elaborate on this in the next post.

To illustrate how simple it is to utilise the Micro framework, here is an example of how we create a service and register it with service discovery (in our case, Kubernetes).

package main

import (
  "log"

  "github.com/micro/go-micro"

  // NOTE: the two import paths below were elided in the original post;
  // these values are illustrative only.
  "github.com/kytra-app/users-srv/handler"       // our request handlers
  proto "github.com/kytra-app/users-srv/proto"   // generated from our protocol buffer
)

func main() {
  // Create the service, naming it for service discovery
  service := micro.NewService(
    micro.Name("users"),
  )
  service.Init()

  // Register our handler against the server interface generated from the proto
  proto.RegisterUsersHandler(service.Server(), handler.New())

  // Run the service, registering it with service discovery
  if err := service.Run(); err != nil {
    log.Fatal(err)
  }
}

You can see in the above example that we’re importing a package referred to as proto. This is a package generated by Micro using our protocol buffer.

When another service in our platform needs to call the UsersService, it simply imports the proto package and creates a new client. The proto package includes all the necessary structs and functions needed in order to utilise the UsersService without writing any networking code. For example, we can call the Update RPC endpoint as follows:

params := &users.User{Uuid: u.UUID, FirstName: "John"}
user, err := handler.usersSrv.Update(context.TODO(), params)

Up Next

I’m planning on writing the following parts to this series, and I’ll update the links below as they become available. I’d love to get your feedback, so if you have any thoughts or requests for topics for me to cover, drop me an email or tweet me @Ben_Toogood.