Microservices in Nameko

In this introduction, we'll build a messenger web application implemented as a set of microservices.

Requirements

Things you should probably have/know to follow along:

  • Python 3
  • Docker
  • An understanding of pip and Python project layout.
  • A very basic understanding of web page structure.

Goals

  • A user can go to a website and send messages
  • A user can see messages that others have sent
  • Messages automatically expire after a configurable amount of time

Preface

(Note, the source code and idea behind this introduction come from the book Python Programming Blueprints, ISBN 978-1-78646-816-1, a fantastic read for production-ready Python code - highly recommended.)

First, some background: we will be using Nameko, an open-source framework for implementing microservices in Python. RPC (remote procedure calls) will be used to communicate between microservices, over AMQP (Advanced Message Queuing Protocol).

A good RPC implementation also encourages the single responsibility principle: when one microservice communicates with another, none of the other service's implementation code leaks into it - it calls the remote method almost as if it were local. For example, if several microservices need to send emails, exposing a sendEmail(userId, template) method on a dedicated email microservice, called over RPC, keeps that responsibility in one place.

Nameko uses AMQP as the transport mechanism for RPC messages between microservices. When a message is sent from a service, it is placed on a queue, where it is consumed by the services that can handle it. Nameko services use workers to complete tasks: when a task is complete, the worker dies. This means Nameko can spin up multiple workers to complete tasks, and scale by the number of workers that can run. Nameko can also be scaled horizontally by increasing the number of instances running your services - this is known as clustering. Nameko can additionally respond to requests over other protocols, such as HTTP and WebSockets.

Setup

The message broker used by Nameko is RabbitMQ, which gives it AMQP as a transport. To spin up a local RabbitMQ instance using Docker, run the following (assuming Docker is set up on your local machine):

docker run -d -p 5672:5672 -p 15672:15672 --name rabbitmq rabbitmq

If, like me, you prefer to monitor your RabbitMQ instance with a web interface, use the following instead (note, both function the same - this one just comes bundled with the management web interface):

docker run -d -p 5672:5672 -p 15672:15672 --name rabbitmq rabbitmq:3-management

You can access the web interface at localhost:15672.

To check that your RabbitMQ instance is running, use:

docker ps

Now, install Nameko (and, since we'll be using TDD, pytest):

pip install nameko pytest

Creating your first Nameko Microservice

  • Create a folder called temp_messenger and place a file inside called service.py.
  • Add the following code:

(Basic implementation of a Nameko service)

from nameko.rpc import rpc

class KonnichiwaService:
    name = 'konnichiwa_service'

    @rpc
    def konnichiwa(self):
        return 'Konnichiwa!'

We import rpc so that we can decorate our methods, exposing them as entry points to our service. An entry point is any method in a Nameko service that acts as a gateway into our service.

In order to create a service, we simply create a class - and assign it a name attribute. The name gives it a namespace - which allows us to make remote calls to it.

Before we test the code, we need to add a configuration file stating the RPC exchange to use, as well as how to access the RabbitMQ instance.

Create a new file in the root of the project called config.yaml with the following content:

AMQP_URI: 'pyamqp://guest:guest@localhost'
rpc_exchange: 'nameko-rpc'

After which, we can run our microservice. To do so, run the following from the root directory (in bash/cmd):

nameko run temp_messenger.service --config config.yaml

Now that you have a running service, how do you make calls to it, or better yet tie services together with RPC calls?

Making a call to the service

To make manual calls to the service we've created, we can use the Nameko shell, which lets you make RPC calls to your services (among other things) and is useful for quickly testing that a service works.

To start a shell, run the following:

nameko shell

In the nameko shell, run the following:

n.rpc.konnichiwa_service.konnichiwa() # Returns Konnichiwa!

Assuming you've done everything correctly up to this point and your service is running, you would be greeted with the text Konnichiwa!.

Unit testing a Nameko Microservice

One of the strengths of Nameko is the ability to unit test its services. Create a new folder called tests and place two files inside: __init__.py and test_service.py. In the test_service.py file, add the following:

from nameko.testing.services import worker_factory
from temp_messenger.service import KonnichiwaService

def test_konnichiwa():
    service = worker_factory(KonnichiwaService)
    result = service.konnichiwa()
    assert result == 'Konnichiwa!'

When running outside of the test environment, Nameko spawns a worker whenever a message is received, which then dies once the message has been processed. For more information, see the Nameko documentation on services.

However, for running tests, Nameko provides worker_factory, which emulates a worker: it is passed a service class (KonnichiwaService, in this case), entry points can then be called on it directly, and the results inspected.

Then run the tests from the root directory (assuming you've installed pytest):

pytest

Exposing HTTP Entry points

We'll now create a microservice responsible for handling HTTP requests. First, add to the imports of service.py:

from nameko.rpc import rpc, RpcProxy
from nameko.web.handlers import http

And then, beneath the KonnichiwaService, add the following:

class WebServer:
    name = 'web_server'
    konnichiwa_service = RpcProxy('konnichiwa_service')

    @http('GET', '/')
    def home(self, request):
        return self.konnichiwa_service.konnichiwa()

Notice how this is similar to the KonnichiwaService we created earlier: it has a name attribute and an entry point (this time in the form of an HTTP endpoint). We specify that it's a GET request and that the path is the root of the site, /. Using RpcProxy, we can make RPC calls to other services - we pass the target service's name when instantiating the proxy.

To try this out - restart Nameko:

nameko run temp_messenger.service --config config.yaml

Nameko serves HTTP entry points on port 8000 by default, so going to the following URL returns the Konnichiwa! response:

http://localhost:8000

Integration testing with Nameko

Unit tests let us call into a single service, but that doesn't exercise the communication between services - only each service's own behaviour. To test how services integrate, Nameko provides the ability to run multiple services in tandem.

Add the following code to your test_service.py file (make sure to also import WebServer from temp_messenger.service):

def test_root_http(web_session, web_config, container_factory):
    web_config['AMQP_URI'] = 'pyamqp://guest:guest@localhost'

    web_server = container_factory(WebServer, web_config)
    konnichiwa = container_factory(KonnichiwaService, web_config)
    web_server.start()
    konnichiwa.start()

    result = web_session.get('/')
    assert result.text == 'Konnichiwa!'

Storing messages

We need to store messages in our application, but only temporarily. We could use a relational database, but instead we're going to use Redis, an in-memory data store. That means faster reads and writes, at the cost of reliability (though Redis does have mechanisms to persist data if needed).

Starting a redis container

To start a redis instance using docker, run the following command:

docker run -d -p 6379:6379 --name redis redis

To access Redis from Python, we have to install the Python client. You do so by running the following:

pip install redis

Using Redis

A brief list of possible commands in Redis:

  • SET : Sets a given key to hold a given string. It also allows us to set an expiration in seconds or milliseconds.
  • GET : Gets the value of the data stored with the given key.
  • TTL : Gets the time-to-live for a given key in seconds.
  • PTTL : Gets the time-to-live for a given key in milliseconds.
  • KEYS : Returns a list of all keys in the data store.

To try out these commands, you can use redis-cli, which comes bundled with the redis Docker image. To get a shell inside the container, run the following:

docker exec -it redis /bin/bash

Then start up redis-cli:

redis-cli

This opens a Redis prompt. To set data, use:

SET <key> <value>

To get that data:

GET <key>

To retrieve all the keys in the db, use the following:

KEYS *

To save data that will expire in 15 seconds, run:

SET <key> <value> EX 15

Running TTL (seconds) or PTTL (milliseconds) will then show you how much longer the message will persist:

TTL <key>

Getting a message after it has expired will return (nil).
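
To make the expiry semantics concrete, here is a toy, stdlib-only model of the SET ... EX / GET / TTL behaviour above. It is a sketch of the observable semantics only, not of how Redis is implemented:

```python
import time

# Toy, stdlib-only model of Redis's SET ... EX / GET / TTL semantics.
# It mimics the observable behaviour only - Redis itself works differently.
class ToyExpiringStore:
    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ex=None):
        # ex mirrors the EX option: expiry in seconds, or no expiry if None
        expires_at = time.monotonic() + ex if ex is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None  # like Redis returning (nil)
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily drop the expired key
            return None
        return value

    def ttl(self, key):
        # Seconds remaining; -1 when the key is absent or has no expiry
        # (real Redis distinguishes these as -2 and -1)
        entry = self._data.get(key)
        if entry is None or entry[1] is None:
            return -1
        return max(0, int(entry[1] - time.monotonic()))

store = ToyExpiringStore()
store.set('greeting', 'Konnichiwa!', ex=15)
```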

Nameko Dependency Providers

When building microservices, Nameko encourages the use of dependency providers to communicate with external resources, such as databases, servers, or anything else your app depends on. By doing this, you can hide logic that is specific to that dependency, keeping your service-level code clean. For a list of open-source dependency providers, see: https://nameko.readthedocs.io/en/stable/community_extensions.html

Adding a Redis Dependency Provider

Redis is a resource external to our application - the perfect candidate for a dependency provider. Create a new folder called dependencies and, inside it, place a file called redis.py.

We'll now create a client with a simple method that fetches a message for a given key. Add the following to redis.py:

from redis import StrictRedis

class RedisClient:
    def __init__(self, url):
        self.redis = StrictRedis.from_url(
            url, decode_responses=True
        )

StrictRedis.from_url creates a Redis connection from a single URL rather than passing the individual components (host, port, db) separately, which is more convenient.

decode_responses=True means the responses will automatically be decoded into Unicode strings when received from Redis.

Now, in the same class, we implement the get_message method, which takes a message id and returns the message if it exists.

We also create a RedisError class:

# In the class created above, create the get_message function
    def get_message(self, message_id):
        message = self.redis.get(message_id)

        if message is None:
            raise RedisError("Message not found: {}".format(message_id))

        return message


class RedisError(Exception):
    pass
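
Since the client only needs an object with a get method, the get_message logic can be exercised without a Redis server. Here is a small self-contained check that restates the class above with a stub connection injected in place of StrictRedis.from_url (the stub and the modified constructor are for illustration only):

```python
# Self-contained check of the get_message logic: a stub connection is
# injected in place of StrictRedis.from_url, so no Redis server is needed.
class RedisError(Exception):
    pass


class RedisClient:
    def __init__(self, redis):
        self.redis = redis  # any object with a .get(key) method

    def get_message(self, message_id):
        message = self.redis.get(message_id)

        if message is None:
            raise RedisError("Message not found: {}".format(message_id))

        return message


class StubRedis(dict):
    pass  # dict already provides .get(), returning None for missing keys


client = RedisClient(StubRedis({'abc123': 'hello'}))
```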

Creating the dependency provider

So far we've created the Redis client; we now need a Nameko dependency provider to expose this client to our services. In the same redis.py file, update your imports and implement the code that follows:

# Add this to your imports
from nameko.extensions import DependencyProvider

# Then implement the following:

class MessageStore(DependencyProvider):
    def setup(self):
        redis_url = self.container.config["REDIS_URL"]
        self.client = RedisClient(redis_url)

    def stop(self):
        del self.client

    def get_dependency(self, worker_ctx):
        return self.client

Our MessageStore class inherits from DependencyProvider; the methods we implement on it are:

  • setup: Called before the service starts. This is usually where you load configuration and create any clients you'll need. (Note that we're assigning to self outside of __init__.)
  • stop: Called when the Nameko service is shutting down.
  • get_dependency: Every dependency provider must implement this. When an entry point fires, Nameko creates a worker and injects the result of get_dependency for each dependency declared on the service; in this case, the worker receives an instance of RedisClient.

Note, Nameko provides other overrides for finer control over the dependency providers you implement; for more info, see http://url.marcuspen.com/nam-writ.
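
The lifecycle described above can be mimicked in plain Python. This toy walk-through is purely illustrative (ToyContainer and ToyMessageStore are made-up stand-ins, not Nameko's API), showing the order in which Nameko drives setup, get_dependency, and stop:

```python
# Toy walk-through of the provider lifecycle (setup -> get_dependency -> stop),
# mimicking in plain Python what Nameko drives for real. All names here are
# illustrative stand-ins, not Nameko's actual classes.
class ToyContainer:
    def __init__(self, config):
        self.config = config


class ToyMessageStore:
    def bind(self, container):
        # Nameko binds the provider to its service container,
        # giving it access to the loaded config.
        self.container = container

    def setup(self):
        # Called before the service starts: read config, build the client.
        self.client = 'client-for-' + self.container.config['REDIS_URL']

    def get_dependency(self, worker_ctx=None):
        # Called per worker: the returned object is injected into the service.
        return self.client

    def stop(self):
        # Called when the service shuts down.
        del self.client


provider = ToyMessageStore()
provider.bind(ToyContainer({'REDIS_URL': 'localhost'}))
provider.setup()
dependency = provider.get_dependency()
provider.stop()
```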

Creating our message service

In our service.py file, we can now make use of the dependency provider we've just created. Let's start by creating a new service.

First update the imports at the top of the file, and then create the new service:

from dependencies.redis import MessageStore

# Message Service implementation
class MessageService:
    name = "message_service"
    message_store = MessageStore()

    @rpc
    def get_message(self, message_id):
        return self.message_store.get_message(message_id)

After implementing your new service, add the following to your config file (assuming you've started a Redis instance on the default port). Note that StrictRedis.from_url expects a full URL, including the scheme:

REDIS_URL: 'redis://localhost:6379/0'

Then, in a terminal - start your service using:

nameko run temp_messenger.service --config config.yaml

Assuming your services start up correctly, you can now open a new Nameko shell - and make an example call to your service:

nameko shell

# Then run this to return a message, assuming it's there:
n.rpc.message_service.get_message('<message id>')

If the message exists, you should get a response with its content; otherwise, you'll get a RedisError - the exception we created when we implemented the RedisClient used by our dependency provider.

Saving messages

Now we need to be able to save messages to the Redis store, which we can do with Redis's SET command via a new method on our dependency provider. SET takes both a key and a value; rather than requiring callers to supply an id, we can generate a unique one ourselves.

Adding a save message method to our dependency provider

In the redis.py file - add the following import to generate a unique id:

from uuid import uuid4

uuid4 generates a unique string that can be used as a message id.

Add the following method to the RedisClient class in redis.py:

def save_message(self, message):
    message_id = uuid4().hex
    self.redis.set(message_id, message)

    return message_id

The .hex attribute gives us the UUID as a 32-character hexadecimal string, which we use as the key for the message and return to the caller.
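
A quick look at what uuid4().hex actually produces:

```python
from uuid import uuid4

# uuid4().hex renders the UUID as a 32-character lowercase hexadecimal
# string (the dashes of the usual UUID form are dropped) - a compact,
# effectively unique message key.
message_id = uuid4().hex
```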

Adding a save_message RPC

Now, for our other services to make use of the save_message method, we need to be able to call it using RPC - in the MessageService in the service.py file, add the following:

@rpc
def save_message(self, message):
    message_id = self.message_store.save_message(message)
    return message_id

Now that we can save messages, users can store them as needed - but they still can't read a message unless they know its UUID. To make things easier, let's work out a way to return all the messages in our Redis store.

Retrieving all messages

We need to add a new method to the Redis dependency provider (RedisClient in redis.py) that returns all the messages:

def get_all_messages(self):
    return [
        {'id': message_id, 'message': self.redis.get(message_id)}
        for message_id in self.redis.keys()
    ]

This fetches every message in the store. Note, however, that you should not use redis.keys in a large-scale application: KEYS scans the entire keyspace and blocks the server until the operation completes, which can have undesirable effects.
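
For larger keyspaces, redis-py also exposes scan_iter(), a cursor-based iterator built on SCAN that walks the keyspace incrementally instead of blocking. Here is a sketch of what that variant of the method could look like, shown against a minimal stub so it runs without a live Redis (StubRedis and the standalone function are illustrative, not part of the real client):

```python
# Sketch of a SCAN-based alternative to redis.keys(), shown against a
# minimal stub so it runs without a live Redis server.
class StubRedis:
    def __init__(self, data):
        self._data = dict(data)

    def scan_iter(self):  # redis-py's client exposes the same method name
        yield from list(self._data)

    def get(self, key):
        return self._data.get(key)


def get_all_messages(redis):
    # Same shape as the method above, but iterating with SCAN
    # instead of fetching every key at once with KEYS.
    return [
        {'id': message_id, 'message': redis.get(message_id)}
        for message_id in redis.scan_iter()
    ]


messages = get_all_messages(StubRedis({'id-1': 'hello', 'id-2': 'world'}))
```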

Adding a new RPC to our MessageService to get all messages

In our MessageService class - add the following method:

@rpc
def get_all_messages(self):
    messages = self.message_store.get_all_messages()
    return messages

Now that we have everything we need to SET/GET/GET ALL messages, we can start displaying them in the browser. That will be covered in a future blog post.