1. Setup AWS Account
2. Provision Required AWS Resources
3. Provision Lambda Functions for Essentials
4. Upload Essentials Configurations
5. Create Scheduled Rules

Setup AWS Account

The sosw implementation currently supports only AWS infrastructure. If you are already running production operations on AWS, we highly recommend setting up a standalone account for your first experiments with sosw. AWS Organizations now provides an easy way to set up sub-accounts from the primary one.

To set up a completely isolated new account, follow the AWS Documentation.
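If you prefer to create the sub-account from your primary one with AWS Organizations, the CLI call is roughly as follows. The email and account name are placeholders, and the command is prefixed with `echo` so you can review it before actually running it:

```shell
# Placeholders: substitute your own email and account name.
# Remove the leading `echo` to actually execute the command.
echo aws organizations create-account \
    --email sosw-sandbox@example.com \
    --account-name "sosw-sandbox"
```

Account creation is asynchronous; you can check its status later with `aws organizations list-create-account-status`.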

We shall require several services, but they are all supposed to fit in the AWS Free Tier. As long as the resources are created using CloudFormation, deleting the stacks will automatically delete the related resources as well, avoiding unnecessary charges.

See Cleanup after tutorials instructions in the Tutorials section.
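For reference, removing a stack later is a single call per stack. A sketch with placeholder stack names, echoed for review first:

```shell
# The stack names below are placeholders; remove the `echo`
# to actually delete the stacks (and their resources).
for STACK in sosw-example-stack-one sosw-example-stack-two; do
    echo aws cloudformation delete-stack --stack-name $STACK
done
```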

Provision Required AWS Resources

This document shall guide you through the setup process for the sosw Essentials and the different resources required for them. All the resources are created following the Infrastructure as Code concept and can easily be cleaned up when no longer required.


The following Guide assumes that you are running these commands from an EC2 machine using either a Key or a Role with permissions to control IAM, CloudFormation, Lambda, CloudWatch, DynamoDB, S3 (and probably a few other services).

If you are running this in the test account - feel free to grant the IAM role of your EC2 instance the policy arn:aws:iam::aws:policy/AdministratorAccess, but never do this in Production.
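Attaching that policy to an existing instance Role can also be done from the CLI. A sketch with a placeholder role name, echoed for review first:

```shell
# `my-test-ec2-role` is a placeholder for your instance Role name.
# Remove the leading `echo` to actually attach the policy.
echo aws iam attach-role-policy \
    --role-name my-test-ec2-role \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```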

If you plan to run tutorials after this, we recommend setting this up in us-west-2 (Oregon) Region. Some scripts in the tutorials guidelines may have the region hardcoded.

Now we assume that you have created a fresh Amazon Linux 2 machine with an IAM Role that has the permissions listed above. If you feel uncertain, you may follow this tutorial - just create a new IAM Role on Step 3 of the instance setup Wizard.


Do not run this in Production AWS Account unless you completely understand what is going on!

The following commands are tested on a fresh EC2 instance of type t2.micro running on default Amazon Linux 2 AMI 64-bit.

# Install required system packages for SAM, AWS CLI and Python.
sudo yum update -y
sudo yum groupinstall -y "Development Tools"
sudo yum install -y zlib-devel python3 python3-devel git docker

# Update pip and ensure you have the required Python packages installed for the user.
# You might not need all of them at first, but you will if you would like to test
# `sosw` or play with it and run the tests.
sudo pip3 install -U pip pipenv boto3

sudo mkdir /var/app
sudo chown ec2-user:ec2-user /var/app

cd /var/app
git clone https://github.com/sosw/sosw.git
cd sosw

# Configure your AWS CLI environment.
# Assuming you are using a new machine, we shall just copy a config with the default
# region `us-west-2` to $HOME. You should not keep credentials in the profile:
# the correct secure way is to use IAM Roles when running from AWS infrastructure.
# Feel free to change or skip this step if your environment is already configured.
cp -nr .aws ~/
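If the repository copy of `.aws` is not available in your checkout, an equivalent minimal config can be written by hand. Note that it only sets the region and output format; credentials are intentionally not stored:

```shell
# Write a minimal AWS CLI config with the default region only.
# Credentials are intentionally NOT stored here: rely on the IAM Role instead.
mkdir -p ~/.aws
cat > ~/.aws/config <<'EOF'
[default]
region = us-west-2
output = json
EOF
```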

Now you are ready to start creating AWS resources. First let us provision some shared resources that both sosw Essentials and sosw-managed Lambdas will use.

# Get your AccountId from EC2 metadata. Assuming you run this on EC2.
ACCOUNT=`curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | \
    grep accountId | awk -F "\"" '{print $4}'`

# Set your bucket name and the path with the CloudFormation templates.
# The values below are examples - adjust them to your setup.
BUCKETNAME=sosw-s3-$ACCOUNT
PREFIX=/var/app/sosw/examples/yaml

# Create new CloudFormation stacks
for filename in `ls $PREFIX`; do
    STACK=`echo $filename | sed s/.yaml//`

    aws cloudformation package --template-file $PREFIX/$filename \
        --output-template-file /tmp/deployment-output.yaml --s3-bucket $BUCKETNAME

    aws cloudformation deploy --template-file /tmp/deployment-output.yaml \
        --stack-name $STACK --capabilities CAPABILITY_NAMED_IAM
done

Now take a break and wait for these resources to be created. You may observe the changes in the CloudFormation web console (Services -> CloudFormation).


DO NOT continue until all stacks reach the CREATE_COMPLETE status.

If you later make any changes in these files (after the initial deployment), use the following script to update the CloudFormation stacks. It does no harm to run it an extra time: CloudFormation is smart enough not to take any action if there are no changes in the templates.

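A minimal sketch of such an update script, assuming the same `$BUCKETNAME` and `$PREFIX` values as in the initial deployment (the path is an example - point it at the directory with the templates):

```shell
# Re-package and re-deploy every template in $PREFIX.
# Deploying an unchanged template is a no-op for CloudFormation.
BUCKETNAME=sosw-s3-$ACCOUNT            # same bucket as before
PREFIX=/var/app/sosw/examples/yaml     # assumed path with the templates

for filename in `ls $PREFIX 2>/dev/null`; do
    STACK=`echo $filename | sed s/.yaml//`

    aws cloudformation package --template-file $PREFIX/$filename \
        --output-template-file /tmp/deployment-output.yaml --s3-bucket $BUCKETNAME

    aws cloudformation deploy --template-file /tmp/deployment-output.yaml \
        --stack-name $STACK --capabilities CAPABILITY_NAMED_IAM
done
```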

Provision Lambda Functions for Essentials

In this tutorial we were initially going to use AWS SAM for provisioning the Lambdas, but eventually gave it up. Too much black magic is required, and you eventually lose control over the Lambda. The example of deploying Essentials uses raw bash/python scripts, the AWS CLI and CloudFormation templates. Contributions providing examples with SAM are welcome. Some sandbox can be found in examples/sam/ in the repository.

# Get your AccountId from EC2 metadata. Assuming you run this on EC2.
ACCOUNT=`curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | \
    grep accountId | awk -F "\"" '{print $4}'`

# Set your bucket name. Use the same bucket as in the previous step.
BUCKETNAME=sosw-s3-$ACCOUNT

for name in `ls /var/app/sosw/examples/essentials`; do
    echo "Deploying $name"

    FUNCTIONDASHED=`echo $name | sed s/_/-/g`

    cd /var/app/sosw/examples/essentials/$name

    # Install sosw package locally.
    pip3 install -r requirements.txt --no-deps --target .

    # Make a source package.
    zip -qr /tmp/$name.zip *

    # Upload the file to S3, so that AWS Lambda will be able to easily take it from there.
    aws s3 cp /tmp/$name.zip s3://$BUCKETNAME/sosw/packages/

    # Package and Deploy CloudFormation stack for the Function.
    # It will create the Function and a custom IAM role for it with permissions
    # to access the required DynamoDB tables.
    aws cloudformation package --template-file $FUNCTIONDASHED.yaml \
        --output-template-file /tmp/deployment-output.yaml --s3-bucket $BUCKETNAME

    aws cloudformation deploy --template-file /tmp/deployment-output.yaml \
        --stack-name $FUNCTIONDASHED --capabilities CAPABILITY_NAMED_IAM
done
If you change anything in the code, or simply want to redeploy it, use the following:

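A sketch of such a redeploy script: re-zip each Essential and point the existing Function at the new package with `update-function-code`. `$BUCKETNAME` is assumed to be set as above, and the function names are assumed to match the directory names:

```shell
for name in `ls /var/app/sosw/examples/essentials 2>/dev/null`; do
    cd /var/app/sosw/examples/essentials/$name

    # Rebuild the source package and upload it to S3.
    zip -qr /tmp/$name.zip *
    aws s3 cp /tmp/$name.zip s3://$BUCKETNAME/sosw/packages/

    # Point the existing Lambda at the fresh package.
    aws lambda update-function-code --function-name $name \
        --s3-bucket $BUCKETNAME --s3-key sosw/packages/$name.zip
done
```

This only updates the code; changes to the IAM role or other resources still require redeploying the CloudFormation stack.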

Upload Essentials Configurations

sosw-managed Lambdas (and Essentials themselves) will automatically try to read their configuration from the DynamoDB table config. Each Lambda looks for the document with a range_key config_name = 'LAMBDA_NAME_config' (e.g. 'sosw_orchestrator_config').

The config_value should contain a JSON that will be recursively merged to the DEFAULT_CONFIG of each Lambda.
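As an illustration, a hypothetical config item could look like the following. The `some_hypothetical_setting` key and the `env` hash key are assumptions for the example - check the actual table schema and each Lambda's DEFAULT_CONFIG:

```shell
# A hypothetical config item in DynamoDB JSON format.
# `env` and `some_hypothetical_setting` are example names, not real sosw keys.
cat > /tmp/config_item.json <<'EOF'
{
    "env":          {"S": "production"},
    "config_name":  {"S": "sosw_orchestrator_config"},
    "config_value": {"S": "{\"some_hypothetical_setting\": 5}"}
}
EOF

# Review, then remove the leading `echo` to actually write the item:
echo aws dynamodb put-item --table-name config --item file:///tmp/config_item.json
```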

We have provided some very basic examples of configuring Essentials. The config files have some values that are dependent on your AWS Account ID, so we shall substitute it and then upload these configs to DynamoDB. It is much easier to do this in Python, so we shall call a Python script for that. The script uses some sosw features for working with DynamoDB, so we shall have to install sosw first.

cd /var/app/sosw
pipenv run pip install sosw
cd /var/app/sosw/examples/
pipenv run python3

### Or alternatively use the old one:
# cd /var/app/sosw/examples/essentials/.config
# python3
# cd /var/app/sosw

Please take your time to read more about the Config Source and find advanced examples in the guidelines of the Orchestrator, Scavenger and Scheduler.

Create Scheduled Rules

The usual implementation expects the Orchestrator and Scavenger to run every minute, while the Scheduler and WorkerAssistant are executed per request. The Scheduler may, of course, have any number of cronned Business Tasks with any desired periodicity.

The following script will create an AWS CloudWatch Events Scheduled Rule that will invoke the Orchestrator and Scavenger every minute.


Make sure not to leave this rule enabled after you finish the tutorial: once you pass the AWS Free Tier for Lambda, it might cause unexpected charges.

# Set parameters. The values below are examples - the template file name is
# hypothetical, point it at the scheduled-rules template in the repository.
BUCKETNAME=sosw-s3-$ACCOUNT
PREFIX=/var/app/sosw/examples/yaml
FILENAME=sosw-scheduled-rules.yaml
STACK=`echo $FILENAME | sed s/.yaml//`

aws cloudformation package --template-file $PREFIX/$FILENAME \
    --output-template-file /tmp/deployment-output.yaml --s3-bucket $BUCKETNAME

aws cloudformation deploy --template-file /tmp/deployment-output.yaml \
    --stack-name $STACK --capabilities CAPABILITY_NAMED_IAM
Manual creation of rules
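If you prefer, the same schedule can be created by hand with the AWS CLI. A sketch with placeholder names and the Orchestrator as the target, echoed for review first:

```shell
# Placeholders: the rule name, region and function ARN are examples.
# Remove the leading `echo` to actually execute the commands.
echo aws events put-rule --name sosw-every-minute \
    --schedule-expression "rate(1 minute)"

echo aws events put-targets --rule sosw-every-minute \
    --targets "Id"="sosw-orchestrator","Arn"="arn:aws:lambda:us-west-2:$ACCOUNT:function:sosw_orchestrator"
```

Note that the Function also needs a resource-based permission (see `aws lambda add-permission`) allowing events.amazonaws.com to invoke it; when you deploy the rules via CloudFormation, the template takes care of this for you.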