ABCs of Azure API Management…

APIs are the common denominator of digital transformation, and most applications run on APIs. So what makes a good API manager, and how do you get the best out of it? The Forrester Wave defines a good API manager by the completeness of three features:

  • A management portal for the API product manager, where pricing, quotas, and usage of the API can be viewed and managed easily
  • A developer portal, where external/internal developers can keep track of the APIs (by API key), create/trace issues, and test the APIs
  • An API gateway, to secure the communication via access control, quotas and rate limits, versioning, authorization, logging, and monitoring

There are more than 20 API management solutions, but our focus will be Azure. In Gartner’s quadrant (and also in the Forrester Wave [1]), Azure API Manager is a contender, partly because it supports only Azure: there is no on-prem/outside-Azure option, no out-of-the-box support for fully automated deployment [2], and no support for API retirement policies.

Keeping these in mind, if you need simple, secure management for your Azure APIs, API Manager is the right place to start your journey. Let’s start!


As the number of APIs/Functions to maintain increases, you spot a pattern of similar requirements: each one needs throttling, versioning, validation, caching, and logging. Rather than adding these features to each endpoint, you will need a better management solution, possibly a proxy to delegate your workload. Azure API Manager lets you handle all this, and much more, by allowing connectivity to any backend endpoint, either on-prem or on any other cloud. Let’s look at each requirement a bit closer.

You can import any OpenAPI, REST, or SOAP definition into Azure API Manager. On the portal you have living API documentation, ready to be tested, with all revisions and change logs of the APIs. Let’s see a couple of features of Azure API Manager.

How does it work?

a. API Gateway (Azure Portal)

The Azure Portal has two roles: one is to manage the APIs, i.e. define the products and import the APIs; the second is the role of the API gateway.

When you create a new API Manager instance, you can click APIs on the blade to import from specifications. You can configure your APIs’ products to define whether they require subscription/approval.
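For those who prefer scripting the import, the AzureRM.ApiManagement PowerShell module can do the same; a minimal sketch, where the resource group, service name, Swagger URL, and path are placeholder values:

```powershell
# Sketch: import an API from a Swagger/OpenAPI definition
# (resource group, service name, URL, and path are illustrative placeholders)
$ctx = New-AzureRmApiManagementContext `
    -ResourceGroupName "my-apim-rg" `
    -ServiceName "my-apim-service"

Import-AzureRmApiManagementApi -Context $ctx `
    -SpecificationFormat "Swagger" `
    -SpecificationUrl "https://example.com/swagger.json" `
    -Path "myapi"
```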

API Management has around 40 policies, which cover most scenarios, to control the flow from the end user to your backend. You can define policies in three sections:

  • inbound (from the caller to the backend)
  • outbound (from the backend to the caller)
  • backend (before the inbound request is forwarded to the backend)

You can define policies either at the product level, which covers all APIs for that product, or at a specific API level. Any policy defined at the product level is executed first; then the API-level policies are executed.

The inbound section is perfect for input validation, such as IP filtering, which I will show below. When the request successfully arrives at the backend section, you can apply policies such as a timeout on the forward-request policy for the backend URL. This is the Web Service URL you defined in the first place when you imported the API. When the request returns through the outbound section, you can set the status code, or use the send-one-way-request policy to handle errors, which I will also show.
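As an illustration, the three sections line up in the policy editor like this (a minimal skeleton; the timeout value is just an example):

```xml
<policies>
    <inbound>
        <base />
        <!-- validation, IP filtering, and rate limits go here -->
    </inbound>
    <backend>
        <!-- forward to the Web Service URL, with a timeout in seconds -->
        <forward-request timeout="60" />
    </backend>
    <outbound>
        <base />
        <!-- set status codes or fire send-one-way-request on errors here -->
    </outbound>
</policies>
```

The `<base />` element pulls in the policies from the enclosing scope, which is how the product-level-first execution order described above is expressed.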

Inbound example: 

Part of the API gateway’s responsibility is making sure your APIs are secure. One way of doing this is throttling. To apply it, you have two choices to make:

  • Execution scope: Product level or API level
  • Editor: Form based UI or XML editor

As mentioned, we can define policies at the product level, which covers all APIs in that group; this is a better option if you have a B2B API with a quota on it. However, it does not allow granular management, such as a limit per individual end user. This is where key-based throttling is helpful: you may want a rate limit, or a quota limit.

You can define basic policies via the form-based editor. Below is an example of an API-subscription-level rate limit; because All operations is selected, the policy is applied across the whole API. We simply select All operations and add an inbound policy with a number of calls and a renewal period. The counter key is the API subscription, because the form does not get any more granular.

Let’s see an API-level example. In the XML editor, you are free to customize your rule; here we check the IP address to throttle the requests.
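A sketch of what that can look like in the XML editor, combining an IP filter with per-IP throttling (the address range and the limits below are placeholder values):

```xml
<inbound>
    <base />
    <!-- only allow callers from a given address range -->
    <ip-filter action="allow">
        <address-range from="10.0.0.1" to="10.0.0.254" />
    </ip-filter>
    <!-- then throttle each caller IP to 10 calls per 60 seconds -->
    <rate-limit-by-key calls="10" renewal-period="60"
        counter-key="@(context.Request.IpAddress)" />
</inbound>
```

The counter-key expression is what makes this more granular than the form-based subscription counter: any policy expression can be used as the key.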

b. Publisher Portal

Azure’s management portal for API Manager is called the Publisher Portal. The configuration of the APIs, such as adding new products, defining subscriptions, approvals, and policies, is done via the Azure Portal. The Publisher Portal lets you check the usage/health of the APIs and gives high-level reports.

c. Developer Portal

The developer portal is the place for devs to test their APIs, to see what they have access to, and to create tickets if they have issues with an API. The UI is fully customizable: you can add/remove menu items and content, supported by widgets.

Hope you enjoyed the overall tour of Azure API Manager. It is a shame it got lower ratings in the Forrester Wave report, but I know companies happily using it in production. Have you? Please do shoot any questions/post comments you may have.

There is great training material on Pluralsight, MS Docs, and Channel 9, if you have not checked it out already!

[1] Microsoft did not reference any clients, thus the calculations were based on Forrester’s experience.

[2] There is a Git deployment option, but it does not cover all configurations, such as users, subscriptions, and properties. There are also DevOps examples on Azure Samples and a deployment utility on the Haufe-Lexware GitHub page.

Creating SPNs [Service Principal Names], Service Plans, Azure Web Apps

Every time I deploy a web app, via VSTS or Octopus, the service principal creation process and web app creation are assumed to be manual and configured in advance of the deployment process. There is always a drop-down that does not let you enter a new service principal name or web app name. [Hope this will change soon, and this post will become unnecessary :)] The meetup demos I attend, as well as the MSDN documentation, don’t mind showing how to add these manually.

However, our Azure governance model follows the functional pattern: one subscription per environment and one SPN per resource group. I should be able to create an SPN for each environment automatically, from scratch, to automate our pipeline; plus, I don’t like doing things manually…


Part I: Creating SPNs

So, what is an SPN? Think of service accounts. For each application [essentially an identifier URI], you create a service principal, with a password [or a certificate] and a homepage URL.
Azure has RBAC, so you can set any level of permission on any object for any user. At a basic level, it has “Reader”, “Contributor”, and “Owner” roles. For simplicity I will use the Contributor role within a specific resource group. You may want to set more granular permissions on the resource group, such as creating app service plans and websites with different service principals, or you may say “sky is the limit” and have your own role definitions too! Check out the MS Docs site for role definitions.
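A sketch of the SPN creation with the AzureRM cmdlets; the display name, password, subscription id, and resource group below are placeholders, and older module versions need a New-AzureRmADApplication step first:

```powershell
# Create a service principal and give it Contributor on one resource group
# (display name, password, subscription id, and resource group are placeholders)
$securePassword = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
$sp = New-AzureRmADServicePrincipal -DisplayName "deploy-dev" -Password $securePassword

# Give AAD a moment to propagate the new principal before assigning the role
Start-Sleep -Seconds 20

New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" `
    -ServicePrincipalName $sp.ApplicationId `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/rg-dev"
```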

Part II: Creating Service Plan and WebApps

For each web application, we need a service plan, like a hosting plan to define:
– Region (West US, East US, etc.),
– Scale count (one, two, three instances, etc.)
– Instance size (Small, Medium, Large)
– SKU (Free, Shared, Basic, Standard, Premium)
And we will deploy our web app on a service plan with the service principal we have created. The nice thing is that Get-AzureRmWebAppPublishingProfile gives you all the deployment account details it has just created [if you are thinking of other deployment methods].
And one thing we found useful was to set ‘AppServiceUse32BitWorkerProcess’ to true. [Scott Hanselman has a great post about it!]
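Putting the plan and the app together; a minimal sketch, with the names, region, and sizes below as placeholder values:

```powershell
# Create the hosting plan, then the web app on it (names are illustrative)
New-AzureRmAppServicePlan -Name "plan-dev" -Location "West US" `
    -ResourceGroupName "rg-dev" -Tier "Standard" `
    -WorkerSize "Small" -NumberofWorkers 2

New-AzureRmWebApp -Name "mywebapp-dev" -Location "West US" `
    -AppServicePlan "plan-dev" -ResourceGroupName "rg-dev"

# Grab the publishing profile for other deployment methods
Get-AzureRmWebAppPublishingProfile -Name "mywebapp-dev" `
    -ResourceGroupName "rg-dev" -OutputFile "mywebapp-dev.publishsettings"
```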

PowerShell Vaccine for the CyberAttack NotPetya

Ok, another cyber-attack…
The Petya ransomware utilises the EternalBlue vulnerability [the same one WannaCry used], targeting people who have not applied the patch. EternalBlue exploits a vulnerability in Microsoft’s SMB protocol, and Microsoft has published security bulletin MS17-010. Find your patch here:

Or, there is a quick fix:
As announced this morning on the BBC website, there is a vaccine, not to kill, but at least to stop the ransomware cyber-attack, so called NotPetya/Petya/Petna/SortaPetya 🙂 Here is the PowerShell version to help you: just put in the server names and enter the credentials at the prompt!

function Protect-Perfc {
    param ([Array]$ServerList, [PSCredential]$Credential)

    $scriptBlock = {
        function Set-PerfcFile {
            param ([string]$File)

            if (Test-Path -Path $File) {
                Write-Output "Item exists"
            }
            else {
                Write-Output "Item does not exist. Creating"
                New-Item -Path $File -Force -ItemType File
            }

            Write-Output "Setting the item readonly property:"
            if ((Get-ItemProperty -Path $File -Name IsReadOnly).IsReadOnly) {
                Write-Output "Item is already readonly"
            }
            else {
                Write-Output "Item is not readonly, setting"
                Set-ItemProperty -Path $File -Name IsReadOnly -Value $true
            }

            Write-Output "File ready as readonly:"
            Get-ItemProperty -Path $File -Name IsReadOnly
        }

        Set-PerfcFile -File "C:\Windows\perfc"
    }

    # Only target servers that have WinRM enabled
    $accessList = @()
    $ServerList | ForEach-Object {
        if (Test-WSMan -ComputerName $_ -ErrorAction SilentlyContinue) {
            Write-Output "WinRM is enabled on $_. Adding to the list."
            $accessList += $_
        }
        else {
            Write-Output "WinRM is not enabled on $_. Run 'winrm quickconfig' to enable."
        }
    }

    Write-Output "These servers will be protected: $($accessList -join ', ')"
    Invoke-Command -ScriptBlock $scriptBlock -Credential $Credential -ComputerName $accessList
    Write-Output "Finished the protection process"
}

$serverList = @("computer1", "computer2")
$credential = Get-Credential
Protect-Perfc -ServerList $serverList -Credential $credential

TeamCity running on Docker

In one of the sessions at JAX London, Paul Stack mentioned they were running TeamCity in containers at HashiCorp. Because I do quite a number of trainings, demos, and talks about continuous delivery, having the CI server/agents portable and containerised is a big win for me. After I saw that JetBrains has official Docker images [for the server and the agents] on Docker Hub, I decided to do it sooner rather than later.
There are quite a few things I will cover to get a good feel for containers.

Step 1: Setup:
I will use Docker Toolbox on my Mac to create the TeamCity server and agents. Two folders are required on my host for the TeamCity server: a data folder and a logs folder, to be introduced as volumes to the server container.

Step 2: Creating VirtualBox VMs:

I have Docker Toolbox installed on my Mac. Why not Docker for Mac? Purely because I want to rely on VirtualBox to manage my machines, and keep the environment variables for the VirtualBox VMs.

mymac:~ demokritos$ docker-machine create --driver virtualbox teamcityserver
mymac:~ demokritos$ docker-machine start teamcityserver
Starting "teamcityserver"...
(teamcityserver) Waiting for an IP...
Machine "teamcityserver" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses.
You may need to re-run the `docker-machine env` command.
mymac:~ demokritos$ docker-machine env teamcityserver
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="/Users/demokritos/.docker/machine/machines/teamcityserver"
export DOCKER_MACHINE_NAME="teamcityserver"
# Run this command to configure your shell:
# eval $(docker-machine env teamcityserver)
mymac:~ demokritos$ eval $(docker-machine env teamcityserver)

Step 3: Create and share the volumes:

We need to create the folders, give permission to our group [my user is in the wheel group], and share the folders with Docker. You can use `stat $folder` to display the permissions.

mymac:~ demokritos$ sudo mkdir -p /opt/teamcity_server/logs
mymac:~ demokritos$ sudo mkdir -p /data/teamcity_server/datadir
mymac:~ demokritos$ sudo chmod g+rw /opt/teamcity_server/logs
mymac:~ demokritos$ sudo chmod g+rw /data/teamcity_server/datadir

And share them in Docker preferences (two folders to share).

This will avoid errors like:
docker: Error response from daemon: error while creating mount source path ‘/opt/teamcity_server/logs’: mkdir /opt/teamcity_server/logs: permission denied.

Step 4: Run the Docker container:

sudo docker run -it --name teamcityserver \
-e TEAMCITY_SERVER_MEM_OPTS="-Xmx2g -XX:MaxPermSize=270m" \
-v /data/teamcity_server/datadir:/data/teamcity_server/datadir \
-v /opt/teamcity_server/logs:/opt/teamcity_server/logs \
-p 50004:8111 jetbrains/teamcity-server

If you get an error like:

docker: Error response from daemon: Conflict.
The container name "/teamcityserver" is already in use
by container 4143c2d13192b8020f066b13a2c033750b4ac1ac7d54e822a6b31a5f47489647.
You have to remove (or rename) that container to be able to reuse that name..

Then you can find them with `docker ps -aq` and remove them in your terminal; if the terminal is blocked, open a new one and remove it, i.e.:

 docker rm 4143c2d13192 

There is a long discussion on Moby’s GitHub site, if you are interested…

And the TC server is ready to be configured… Next, we will set up the agents…
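To give a taste of the next post, connecting an agent is a one-liner against the official image; the server address below is a placeholder, assuming the port mapping used above:

```shell
# Run a TeamCity build agent pointing at the server container
# (replace <teamcityserver-ip> with your docker-machine's IP address)
sudo docker run -d --name teamcityagent \
  -e SERVER_URL="http://<teamcityserver-ip>:50004" \
  jetbrains/teamcity-agent
```

The agent then shows up under Unauthorized Agents on the server, waiting to be approved.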

Installing Zabbix 3.2 on AWS Ubuntu 16.04


I had a challenge: to get my Zabbix server up and running on AWS. This initial version is based on bash scripts; the next versions will be smarter… The Zabbix version I will install is 3.2.

A. Setup:

  • Image: Ubuntu Server 16.04 LTS (HVM), SSD Volume Type – ami-a8d2d7ce
  • Type: t2.micro
  • Storage: 8 gig
  • Tag: Name = Zabbix
  • Security group: SSH [TCP/22], HTTP [TCP/80] and Zabbix [TCP/10050], for access from anywhere.

B. Installations for Zabbix Server:
#Get the updated repos and install LAMP server. Notice the ^.

$ sudo apt-get update
$ sudo apt-get install lamp-server^

Note the password for MySQL, as it will be used later on:

$ sudo service apache2 restart
$ sudo systemctl enable apache2
$ wget
$ dpkg -i zabbix-release_3.2-1+xenial_all.deb
$ apt-get update
$ sudo apt-get install zabbix-server-mysql zabbix-frontend-php
$ sudo service mysql start

To secure our SQL installation, we need to configure a few options. Say No to changing the password, and Yes to the rest of the questions.

$ sudo mysql_secure_installation

We will create the zabbix database and set a new password. Keep the quotation marks. Notice, we will use this password to connect to MySQL.

$ mysql -uroot -p 
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '';
mysql> quit;

We need to restore the Zabbix schema into the database we created. It will prompt you to enter the password to connect to the zabbix database.

$ cat /usr/share/doc/zabbix-server-mysql/create.sql.gz |
 mysql -uzabbix -p zabbix

We also need to keep the password in the Zabbix server configuration (the DBPassword parameter):

$ sudo vi /etc/zabbix/zabbix_server.conf
$ sudo service zabbix-server start
$ sudo update-rc.d zabbix-server enable

In /etc/zabbix/apache.conf, uncomment the php_value line for date.timezone and set it to your relevant timezone.

$ sudo vi /etc/zabbix/apache.conf
>php_value date.timezone Europe/London

Restart the apache server:

$ service apache2 restart

Browse to http://<your-server>/zabbix:

Note 1: If you get errors on the page:

PHP bcmath extension missing (PHP configuration parameter --enable-bcmath).
PHP mbstring extension missing (PHP configuration parameter --enable-mbstring).
PHP xmlwriter extension missing.
PHP xmlreader extension missing.

Run on the server:

$ sudo apt-get install php-bcmath
$ sudo apt-get install php-mbstring
$ sudo apt-get install php-xml

Zabbix discoverer processes more than 75% busy

$ sudo vi /etc/zabbix/zabbix_server.conf   # increase StartDiscoverers
$ sudo service zabbix-server restart
$ sudo service apache2 restart

Lack of free swap space on Zabbix server

sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab
sudo swapon -a 

C. Add agents to CentOS/Ubuntu machines:
#Installing Zabbix agent on Ubuntu 16.04:

sudo wget
sudo dpkg -i zabbix-release_3.0-1+xenial_all.deb
sudo apt-get update
sudo apt-get install zabbix-agent
sudo service zabbix-agent start

Installing Zabbix agent on CentOS 7.3:

sudo rpm -ivh
sudo yum update
sudo yum install zabbix-agent
sudo service zabbix-agent start

Agent is not starting on CentOS 7.3, permission denied:


# tail -3 /var/log/zabbix/zabbix_agentd.log
$ cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1
type=AVC msg=audit(1494325619.250:1410): avc: denied { setrlimit } for pid=26242 comm="zabbix_agentd" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:zabbix_agent_t:s0 tclass=process

Solution :
Get the required policy and apply the output displayed:

 sudo cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1 | sudo audit2allow -M zabbix_agent_setrlimit
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i zabbix_agent_setrlimit.pp
# sudo semodule -i zabbix_agent_setrlimit.pp
# sudo systemctl daemon-reload
# sudo systemctl start zabbix-agent

JaxDevops 2017

I had the chance to attend JAX DevOps London; here is a valuable session from Daniel Bryant about the common mistakes made with microservices…

    1. Lust [Use the Unevaluated Latest and Greatest Tech]:
      1. Be an expert on Evaluation
      2. Spine Model: going up the spine solves the problems — not the first step (Tools), but Practices, Principles, Values, Needs
    2. Gluttony: Communication Lock-In
      1. Don’t rule out RPC [e.g. gRPC]
      2. Stick to the Principle of Least Surprise [JSON over HTTPS]
      3. Don’t let the API Gateway morph into an ESB
      4. Check the cool tools: MuleSoft, Kong, Apigee, AWS API Gateway
    3. Greed: What Is Mine [within the Org]
      1. “We’ve decided to reform our teams around squads, chapters, and Guilds”:  Be aware of Cargo-Culting:
    4. Sloth: Getting Lazy with NFRs:
      1. Ilities [“Availability, Scalability, Auditability, Testability”] can become an afterthought
      2. Security: Aaron Grattafiori DockerCon2016 Talk/InfoQ
      3. Thoughtworks: AppSec & Microservices
      4. Build Pipeline:
        1. Performance and load testing:
          1. Gatling/JMeter
          2. Flood.IO [upload Gatling script/scale]
        2. Security Testing:
          1. FindSecBugs / OWASP Dependency-Check
          2. BDD-Security (OWASP ZAP) / Arachni
          3. Gauntlt / Serverspec
          4. Docker Bench for security/Clair
    5. Wrath: Blowing Up When Bad Things Happen
      1. Michael Nygard (Release It!): turn ops into a Simian Army
      2. Distributed Transactions:
        1. Don’t push transactional scope into a single service
        2. Supervisor/Process Manager: Erlang OTP, Akka, EIP
      3. Focus on What Matters:
        1. CI/CD
        2. Mechanical Sympathy
        3. Logging
        4. Monitoring
      4. Consider:
        1. DEIS
        2. CloudFoundry
        3. OpenShift
    6. Envy: The Shared Single Domain (and Data Store) Fallacy
      1. Know your DDD:
        1. Entities
        2. Value Objects
        3. Aggregates and Roots
        4. Book:
          1. Implementing Domain-Driven Design
          2. Domain-Driven Design Distilled [high level]
            1. Context Mapping [Static] & Event Storming [Dynamic]
              1. infoq
              2. ziobrando
            2. Data Stores:
              1. RDBMS:
              2. Cassandra
              3. Graph -> Neo4j, Titan
              4. Support! Op Overhead
    7. Pride: Testing in the World
      1. Testing Strategies in a Microservice Architecture [Martin Fowler]
      2. Andrew Morgan [Virtual API Service Testing]
      3. Service Virtualisation:
        1. Classic Ones:
          1. CA Service Virtualization
          2. Parasoft Virtualize
          3. HPE Service Virtualization
          4. IBM Test Virtualization Server
        2. New kids:
          1. [SpectoLabs] Hoverfly: Lightweight
            1. Fault Injection
            2. Chaos Monkey
          2. Wiremock
          3. VCR/BetaMax
          4. Mountebank
          5. Mirage


Getting latest workspace…

Getting the latest code from all workspaces can be time-consuming, and forgetting to do so can cause bigger issues…

So, here is the remedy:

There is a hard-coded “d” drive in the script to change the drive and navigate to the source code folders. If your code is on the c drive, you can just remove that part…



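The script itself did not survive the move here, so below is a hypothetical reconstruction of the idea: switch to the d drive, walk the workspace folders, and run tf get in each. The root folder name is an assumption, and tf.exe is expected on the PATH (it ships with Visual Studio / Team Explorer):

```powershell
# Hypothetical sketch: get latest for every workspace folder under d:\source
# "d:\source" is an assumed root; adjust it to where your workspaces live
Set-Location "d:\"
Get-ChildItem "d:\source" -Directory | ForEach-Object {
    Write-Output "Getting latest in $($_.FullName)"
    Set-Location $_.FullName
    & tf get /recursive
}
```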

I think it would be handier with more error handling and a report at the end, but as a quick solution, it does the job…