ABCs of Azure API Management…

APIs are the common denominator of digital transformation, and most modern applications run on them. So what makes a good API manager, and how do you get the best out of it? The Forrester Wave defines a good API management solution by the completeness of three features:

  • A management portal for the API product manager, where pricing, quotas, and usage of the API can be viewed and managed easily
  • A developer portal, where external/internal developers can keep track of the APIs (by API key), create and trace issues, and test the APIs
  • An API gateway, to secure the communication via access control, quotas and rate limits, versioning, authorization, logging and monitoring

There are more than 20 API management solutions on the market, but our focus will be Azure. In analyst reports such as Gartner’s Magic Quadrant and the Forrester Wave [1], Azure API Management is rated a contender, partly because it supports only Azure (there is no on-prem or out-of-Azure option), there is no out-of-the-box support for a fully automated deployment [2], and there is no support for API retirement policies.

So, keeping these in mind: if you need simple, secure management for your Azure APIs, API Management is the right place to start your journey. Let’s start!

Why?

As the number of APIs/Functions you maintain increases, you spot a pattern of similar requirements: each one needs throttling, versioning, validation, caching, and logging. Rather than adding these features to each endpoint, you need a better management solution, possibly a proxy to delegate the workload to. Azure API Management handles all of this, and much more, and can connect to any backend endpoint, whether on-prem or on any other cloud. Let’s look at each requirement a bit closer.

You can import any OpenAPI (Swagger), WADL, or SOAP (WSDL) definition into Azure API Management. On the portal you then have living API documentation, ready to be tested, with all revisions and change logs of the APIs. Let’s see a couple of features of Azure API Management.
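
For example, here is a minimal sketch of importing a Swagger (OpenAPI) definition with the AzureRM.ApiManagement PowerShell module; the resource group, service name and specification URL are placeholders, not a real deployment:

# Sketch: import a Swagger/OpenAPI definition into an existing API Management instance
$context = New-AzureRmApiManagementContext -ResourceGroupName 'resourcegroup' -ServiceName 'my-apim-service'
Import-AzureRmApiManagementApi -Context $context `
    -SpecificationFormat Swagger `
    -SpecificationUrl 'https://example.com/swagger.json' `
    -Path 'demo-api'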

How does it work?

a. API Gateway (Azure Portal)

The Azure portal has two roles here: one is to manage the APIs, i.e. define the products and import the APIs; the other is the role of the API gateway itself.

When you create a new API Management instance, you can click APIs on the blade to import from specifications. You can also configure your APIs’ products, to define whether they require a subscription and/or approval.

API Management has around 40 policies, which cover most scenarios for controlling the flow from the end user to your backend. You can define policies in three sections:

  • inbound (from the caller to the backend)
  • outbound (from the backend back to the caller)
  • backend (before the request is forwarded to the backend)

You can define a policy either at product level, which covers all APIs for that product, or at a specific API level. Any policy defined at product level is executed first, then the API-level policies are executed.

The inbound section is perfect for input validation, such as IP filtering, which I will show below. When the request successfully reaches the backend section, you can add policies such as a timeout on the forward-request policy; the request is forwarded to the Web Service URL you defined when you first imported the API. When the response returns through the outbound section you can set the status code, or use the send-one-way-request policy to handle errors, which I will also show.
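
Putting these together, here is a sketch of a policy document; the IP range and the webhook URL are just placeholders:

<policies>
    <inbound>
        <base />
        <!-- Input validation: only allow callers from a known address range -->
        <ip-filter action="allow">
            <address-range from="10.0.0.1" to="10.0.0.255" />
        </ip-filter>
    </inbound>
    <backend>
        <!-- Forward to the Web Service URL configured at import time, with a timeout in seconds -->
        <forward-request timeout="20" />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <!-- Fire-and-forget notification when something goes wrong -->
        <send-one-way-request mode="new">
            <set-url>https://hooks.example.com/api-alerts</set-url>
            <set-method>POST</set-method>
            <set-body>@(context.LastError.Message)</set-body>
        </send-one-way-request>
    </on-error>
</policies>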

Inbound example: 

Part of the API gateway’s responsibility is making sure your APIs are secure. One way of doing this is via throttling. To apply it, you have the following options:

  • Execution scope: Product level or API level
  • Editor: Form based UI or XML editor

As mentioned, we can define the policies at the product level, which covers all APIs in that group; this is a better option if you have a B2B API with a quota on it. However, it does not allow granular management, such as an individual end-user limit. This is where key-based throttling is helpful: you may want a rate limit, or a quota limit, per key.

You can define basic policies via the form-based editor. Below is an example of an API-subscription-level rate limit; because All operations is selected, the policy applies to the whole API rather than a single operation. We simply select All operations and add an inbound policy with a number of calls and a renewal period. The counter key is the API subscription, because the form-based editor does not let us get any more granular.
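
The policy the form produces looks roughly like this (the numbers are just examples):

<inbound>
    <base />
    <!-- 20 calls per subscription per 60-second window -->
    <rate-limit calls="20" renewal-period="60" />
</inbound>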


Now let’s see an API-level policy in the XML editor, where you are free to customize your rule; here we check the caller’s IP address to throttle the requests.
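
A sketch of such a rule, throttling per calling IP address instead of per subscription (the numbers are again just examples):

<inbound>
    <base />
    <!-- 10 calls per 60 seconds, counted per caller IP address -->
    <rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
</inbound>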


b. Publisher Portal

Azure’s management portal for API Management is called the Publisher Portal. The configuration of the APIs, such as adding new products, defining subscriptions, approvals and policies, is done via the Azure portal. The Publisher Portal lets you check the usage/health of the APIs and gives high-level reports.



c. Developer Portal

The developer portal is the place for devs to test the APIs, to see what they have access to, and to create tickets if they have issues with an API. The UI is fully customizable: you can add/remove menu items and content, supported by widgets.

Hope you enjoyed this overall tour of Azure API Management. It is a shame it was rated the way it was in the Forrester Wave report [1] and got lower scores, but I know companies happily using it in production. Have you (not)? Please do shoot any questions or post any comments you may have.

There is great training material on Pluralsight, MS Docs, and Channel 9, if you have not checked it out already!

[1] Microsoft did not provide any client references, so the scores were based on Forrester’s own experience.

[2] There is a Git deployment option, but it does not cover all configurations, such as users, subscriptions, and properties. There are also DevOps examples in the Azure samples and a deployment utility on the Haufe-Lexware GitHub page.

Creating SPNs [Service Principal Names], Service Plans, Azure Web Apps

Every time I deploy a web app via VSTS or Octopus, the service principal creation process and the web app creation are assumed to be manual and configured in advance of the deployment process. There is always a drop-down that does not let you type in a new service principal name, or a new web app name. [Hope this will change soon, and this post will become unnecessary :)] The meetup demos I attend, as well as the MSDN documentation, don’t mind showing how to add these manually.

However, our Azure governance model follows the functional pattern and requires one subscription per environment and one SPN per resource group, so I should be able to create an SPN for each environment automatically, from scratch, to automate our pipeline. Plus, I don’t like doing things manually…

VSTS service endpoints / Octopus accounts

Part I: Creating SPNs

So, what is an SPN? Think of service accounts. For each application [essentially an identifier URI], you create a service principal with a password [or a certificate] and a homepage URL.
Azure has RBAC, so you can set any level of permission on any object for any user. At the most basic level there are "Reader", "Contributor" and "Owner" roles. For simplicity I will use the Contributor role within a specific resource group. You may want to set more granular permissions on the resource group, for example having app service plans and websites created by different service principals, or you may say "the sky is the limit" and create your own role definitions too! Check out the MSDN site for role definitions.
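
If you do want to go granular, a quick way to start is to inspect an existing role and use its Actions/NotActions as a template for your own definition:

# Look at the built-in Contributor role as a template for a custom role definition
Get-AzureRmRoleDefinition -Name 'Contributor' | Select-Object Name, Actions, NotActions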


function New-AzureSpn{
param([string]$Subscriptionid ,
[string]$environmentName,
[string]$ApplicationName,
[string]$resourceGroupName,
[string]$location,
[string]$password
)
#Login-AzureRmAccount
$DisplayName =$ApplicationName+ $environmentName + "SPN"
$HomePage = "http://$applicationName.clouddev.com"
$IdentifierUri = "http://$applicationName.clouddev.com"
#refer the first script
###########################################################################
#Step1: Check there is no existing AzureAD Application with the same Uri:
#Names are not unique, but the IdentifierUri should be unique
###########################################################################
$clientApplication=Get-AzureRmADApplication -IdentifierUri $identifierUri
If ($clientApplication) {
Write-Output "There is already an AD Application for this URI: $identifierUri"
#Either remove, stop the process and rename the Uri, or get the ADApplication, which is the default behavior here:
#Remove-AzureRmADApplication -ObjectId $clientApplication.ObjectId -Force
}
else
{
$clientApplication = New-AzureRmADApplication -DisplayName $displayName -HomePage $homePage -IdentifierUris $identifierUri -Password $password -Verbose
Write-Output "A new AzureAD Application is created for this URI: $identifierUri"
}
################################################
#Step2: Create Application and SPN:
################################################
$clientId = $clientApplication.ApplicationId
Write-Output "Azure AAD Application creation completed successfully (Application Id: $clientId)" -Verbose
if((Get-AzureRmADServicePrincipal -ServicePrincipalName $clientId -ErrorAction SilentlyContinue) -eq $null){
$spn = New-AzureRmADServicePrincipal -ApplicationId $clientId
}
else {
$spn = Get-AzureRmADServicePrincipal -ServicePrincipalName $clientId
}
# Assign the Contributor role to the new service principal, scoped to the resource group
$spnRole = "Contributor"
$resourceGroup=Get-AzureRmResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue
if ( $null -eq $resourceGroup)
{
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location
}
New-AzureRmRoleAssignment -RoleDefinitionName $spnRole -ServicePrincipalName $clientId -Verbose -ResourceGroupName $resourceGroupName
################################################
#Step3: Login to verify the SPN
################################################
#If you have not yet,now we can login and verify our new spn:
#Login-AzureRmAccount -Credential $creds -ServicePrincipal -TenantId $tenantId
#Use the subscriptionId parameter from loginazuresubscription script.
$tenantId= (Get-AzureRmSubscription -SubscriptionId $subscriptionid).TenantId
$objectId=$spn.Id
#Cleanup actions enable below if you wanted to clean up the azure ad application
#Remove-AzureRmADApplication -ObjectId $clientApplication.ObjectId -Force
################################################
#Step4: Get the packer/VSTS/Octopus info
################################################
Write-Host 'subscription_Id :' $subscriptionid.tostring()
Write-Host 'tenant_Id : ' $tenantId.ToString()
Write-Host 'object_id :' $objectId.ToString()
write-Host 'client_id/Username/SPN Name :' $clientId.ToString()
write-Host 'client_secret/Password : ' $password.ToString()
Write-Host 'spn_displayname: ' $DisplayName.Tostring()
}
$params= @{
Password=New-Guid
ResourceGroupName='resourcegroup'
Subscriptionid='xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
Location='location'
ApplicationName='app'
EnvironmentName='dev'
}
Login-AzureRmAccount
New-AzureSpn @params

Part II: Creating Service Plan and WebApps

For each web application we need an App Service plan, i.e. a hosting plan that defines:
– Region (West US, East US, etc.),
– Scale count (one, two, three instances, etc.)
– Instance size (Small, Medium, Large)
– SKU (Free, Shared, Basic, Standard, Premium)
And we will deploy our web app onto a service plan, using the service principal we have created. The nice thing is that Get-AzureRmWebAppPublishingProfile gives you all the deployment account details it has just created [useful if you are thinking of other deployment methods].
And one thing we found useful was to set ‘AppServiceUse32BitWorkerProcess’ to true. [Scott Hanselman has a great post about it!]


###############################################
function Add-Account {
param(
[string]$AzureTenantId,
[string]$AzureServicePrincipalName,
[string]$AzureSPNPassword
)
###############################################
##Step1: Get Variables
$SPNNamingStandard='^[a-z]{5,40}$'
###############################################
##Step2: Validate Variables:
if (!($AzureServicePrincipalName -match $SPNNamingStandard))
{
Write-Output "SPN is not in the right format"
}
###############################################
##Step3: Create Account
Write-Output "Creating Account"
$SecurePassword = ConvertTo-SecureString -asplaintext -force $AzureSPNPassword
$SecureCredential = New-Object System.Management.Automation.PSCredential ($AzureServicePrincipalName, $SecurePassword)
write-output '###############################################'
write-output '##Step4: Login to the SPN Account'
try{
write-output "Adding AzureRM Account"
Add-AzureRmAccount -ServicePrincipal -Tenant $AzureTenantId -Credential $SecureCredential
}
catch {
Write-Output $_
throw "Cannot add account $AzureServicePrincipalName"
}
}
###############################################
###############################################
## Check and Create Service Plan
function New-AzureAppServicePlan{
param([string]$ResourceGroupName,
[string]$AppServicePlanName,
[string]$Location,
[string]$AppServicePlanNumberofWorkers,
[string]$AppServicePlanWorkerSize,
[string]$AppServicePlanTier
)
try{
$ServicePlan= Get-AzureRmAppServicePlan -ResourceGroupName $ResourceGroupName -Name $AppServicePlanName -ErrorAction SilentlyContinue
if ($null -eq $ServicePlan)
{
$ServicePlan=New-AzureRmAppServicePlan -Name $AppServicePlanName -Location $Location -ResourceGroupName $ResourceGroupName -Tier $AppServicePlanTier -WorkerSize $AppServicePlanWorkerSize -NumberofWorkers $AppServicePlanNumberofWorkers
}
}
catch{
Write-Output "Cannot add serviceplan : $AppServicePlanName "
Write-Output $_
Throw "Something went wrong"
}
return $ServicePlan
}
###############################################
## Check and Create Web App
function New-AzureWebApp {
param(
[bool]$AppServiceUse32BitWorkerProcess,
[string]$AppServicePlanName,
[string]$Location,
[string]$PublishProfilePath,
[string]$ResourceGroupName,
[string]$WebAppName
)
try{
$WebApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName -ErrorAction SilentlyContinue
if($null -eq $WebApp)
{
$WebApp = New-AzureRmWebApp -Name $WebAppName -AppServicePlan $AppServicePlanName -ResourceGroupName $ResourceGroupName -Location $Location
}
Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $WebAppName -Use32BitWorkerProcess $AppServiceUse32BitWorkerProcess
if (!(Test-Path -Path (split-path $PublishProfilePath -Parent))){
throw [System.IO.FileNotFoundException] "$PublishProfilePath not found."
}
$profile = Get-AzureRmWebAppPublishingProfile -OutputFile $PublishProfilePath -ResourceGroupName $ResourceGroupName -Name $WebAppName -Format WebDeploy -Verbose
if ($profile){
([xml] $profile).publishData.publishProfile | select publishMethod, publishurl, username, userPWD
}
else{
throw "There was a problem with your publishprofile, check your webapp"
}
}
catch{
Write-Output "Cannot add webapp : $WebAppName"
Write-Output $_
}
return $WebApp
}
###############################################
##Step1: Define the Variables
$ServicePlanParams= @{
ResourceGroupName = "resourcegroup"
Location = "NorthEurope"
AppServicePlanName = "AppServicePlanName"
AppServicePlanTier = "Basic"
AppServicePlanWorkerSize = "Small"
AppServicePlanNumberofWorkers =3
}
$TimeStamp = Get-Date -Format ddMMyyyy_hhmmss
$WebAppPlanParams= @{
WebAppName = "wineAppDemo20170809"
ResourceGroupName = "resourcegroup"
Location = "NorthEurope"
AppServicePlanName = "AppServicePlanName"
PublishProfilePath = Join-Path -Path $ENV:Temp -ChildPath "publishprofile$TimeStamp.xml"
AppServiceUse32BitWorkerProcess=$true
}
$AzureAccountParams= @{
AzureTenantId=$AzureTenantId
AzureServicePrincipalName=$AzureServicePrincipalName
AzureSPNPassword=$AzureSPNPassword
}
Add-Account @AzureAccountParams
New-AzureAppServicePlan @ServicePlanParams
New-AzureWebApp @WebAppPlanParams

PowerShell Vaccine for the CyberAttack NotPetya

Ok, another cyber-attack…
The Petya ransomware utilises the EternalBlue exploit [the same one WannaCry used], targeting machines that have not been patched. EternalBlue exploits a vulnerability in Microsoft’s SMB protocol, and Microsoft has published a security bulletin for it. Find your patch here:
https://en.wikipedia.org/wiki/EternalBlue
https://technet.microsoft.com/en-us/library/security/ms17-010.aspx

Or, there is a quick fix:
As announced this morning on the BBC website, there is a vaccine that does not kill, but at least stops, the ransomware cyber-attack, the so-called NotPetya/Petya/Petna/SortaPetya 🙂 Here is the PowerShell version to help you: just put in the server names and enter the credentials at the prompt!

function Protect-Perfc {
    param ([Array]$ServerList, [PSCredential]$PSCredential)
    $scriptBlock= {

        function Set-PerfcFile {
            param ([string]$File )

            if (Test-Path -Path $File){
                Write-Output "Item exists"
            }
            else {
                Write-Output "Item does not exist. Creating"
                New-Item -Path $File -Force -ItemType File
            }
            Write-Output "Setting the item readonly property:"

            if ((Get-ItemProperty $File -Name IsReadOnly).IsReadOnly)
            {
                Write-Output "Item is already readonly"
            }
            else {
                Write-Output "Item is not readonly, setting"
                Set-ItemProperty -Path $File -Name IsReadOnly -Value $true
            }
            Write-Output "File ready as readonly: "
            Get-ItemProperty  $file -Name IsReadonly

        }
        Set-PerfcFile "C:\Windows\perfc"

    }  

    $AccessList=@()
    $serverList| %{ if (Test-Wsman -ComputerName $_  -ErrorAction SilentlyContinue) {
            $AccessList+=$_
            write-output "WinRM is enabled $_ . Adding to the list."
            }
            else {
            write-output "WinRM is not enabled on $_ . Run 'winrm quickconfig' to enable "
            }
        }
    Write-Output "These servers will be protected"
    $AccessList

    Invoke-Command -ScriptBlock $ScriptBlock -Credential $PSCredential -ComputerName $AccessList
    Write-Output "Finished the protection process"
}
$Serverlist= @("computer1", "computer2")
$Credential = Get-Credential
Protect-Perfc -ServerList $Serverlist -PSCredential $Credential

TeamCity running on Docker

In one of the sessions at JaxLondon, Paul Stack mentioned they run TeamCity on containers at HashiCorp. Because I do quite a number of trainings, demos and talks about Continuous Delivery, having the CI server/agents portable and containerised is a big win for me. After I saw that JetBrains has official Docker images [for the server and the agents] on Docker Hub, I decided to do it sooner rather than later.
There are quite a few things I will cover to get a good feel for containers.


Step1: Setup:
I will use Docker Toolbox on my Mac to create the TeamCity server and agents. Two folders are required on my host for the TeamCity server: a data folder and a logs folder, to be introduced as volumes to the server container.



Step2: Creating VirtualBox VMs:

I have Docker Toolbox installed on my Mac. Why not Docker for Mac? Purely because I want to rely on VirtualBox to manage my machines, and keep environment variables per VirtualBox VM.

mymac:~ demokritos$ docker-machine create --driver virtualbox teamcityserver
mymac:~ demokritos$ docker-machine start teamcityserver
Starting "teamcityserver"... (teamcityserver) Waiting for an IP...
Machine "teamcityserver" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses.
You may need to re-run the `docker-machine env` command.
mymac:~ demokritos$ docker-machine env teamcityserver
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/demokritos/.docker/machine/machines/teamcityserver"
export DOCKER_MACHINE_NAME="teamcityserver"
# Run this command to configure your shell:
# eval $(docker-machine env teamcityserver)
mymac:~ demokritos$ eval $(docker-machine env teamcityserver)


Step3: Create and share our volumes:

We need to create the folders, give read/write permission to our group [my user is in the wheel group], and share the folder with Docker. You can run `stat $folder` to display the permissions.

mymac:~ demokritos$ sudo mkdir -p /opt/teamcity_server/logs
mymac:~ demokritos$ sudo mkdir -p /data/teamcity_server/datadir
mymac:~ demokritos$ sudo chmod g+rw /opt/teamcity_server/logs
mymac:~ demokritos$ sudo chmod g+rw /data/teamcity_server/datadir

And share them in Docker’s preferences (two folders to share).

This will avoid errors like :
docker: Error response from daemon: error while creating mount source path ‘/opt/teamcity_server/logs’: mkdir /opt/teamcity_server/logs: permission denied.


Step4: Run the Docker container:

sudo docker run -it --name teamcityserver \
-e TEAMCITY_SERVER_MEM_OPTS="-Xmx2g -XX:MaxPermSize=270m -XX:ReservedCodeCacheSize=350m" \
-v /data/teamcity_server/datadir:/data/teamcity_server/datadir \
-v /opt/teamcity_server/logs:/opt/teamcity_server/logs \
-p 50004:8111 jetbrains/teamcity-server

If you get an error like :

docker: Error response from daemon: Conflict.
The container name "/teamcityserver" is already in use
by container 4143c2d13192b8020f066b13a2c033750b4ac1ac7d54e822a6b31a5f47489647.
You have to remove (or rename) that container to be able to reuse that name..

Then you can find the container with `docker ps -aq` and remove it in your terminal (if the current terminal is busy, open a new one), i.e.:

 docker rm 4143c2d13192 

There is a long discussion about this on the moby GitHub repo, if you are interested…

And the TC server is ready to be configured… Next, we will set up the agents…

Installing Zabbix 3.2 on AWS Ubuntu 16.04

Hello,

I had a challenge: to get my Zabbix server up and running on AWS. This initial version is plain shell commands; next versions will be smarter… The Zabbix version I will install is 3.2.

A. Setup:

  • Image: Ubuntu Server 16.04 LTS (HVM), SSD Volume Type – ami-a8d2d7ce
  • Type: t2.micro
  • Storage: 8 gig
  • Tag: Name = Zabbix
  • Security group:  SSH [TCP/22], HTTP [TCP/80] and Zabbix agent [TCP/10050], open for access from anywhere.

B. Installations for Zabbix Server:
#Get the updated repos and install LAMP server. Notice the ^.

$ sudo apt-get update
$ sudo apt-get install lamp-server^

Note the password you set for MySQL, as it will be used later on:

$ sudo service apache2 restart
$ sudo systemctl enable apache2
$ wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
$ dpkg -i zabbix-release_3.2-1+xenial_all.deb
$ apt-get update
$ sudo apt-get install zabbix-server-mysql zabbix-frontend-php
$ sudo service mysql start

To secure MySQL, we need to configure a few options. Say No to changing the root password, and Yes to the rest of the questions.

$ sudo mysql_secure_installation

We will create the zabbix database and set a new password for the zabbix user. Keep the quotation marks. Note that we will use this password later to connect to MySQL as the zabbix user.

$ mysql -uroot -p 
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '';
mysql> quit;

We need to restore the Zabbix schema into the database we created. It will prompt you to enter the zabbix user’s password to connect to the zabbix database.

$ cat /usr/share/doc/zabbix-server-mysql/create.sql.gz |
 mysql -uzabbix -p zabbix

We also need to keep the password in zabbix server configuration:

$ sudo vi /etc/zabbix/zabbix_server.conf
>DBHost=localhost
>DBName=zabbix
>DBUser=zabbix
>DBPassword=''
$ sudo service zabbix-server start
$ sudo update-rc.d zabbix-server enable

Change /etc/zabbix/apache.conf: uncomment the php_value line for date.timezone and set it to your timezone.

$ sudo vi /etc/zabbix/apache.conf
>php_value date.timezone Europe/London

Restart the apache server:

$ service apache2 restart

Browse to http://<your-server-ip>/zabbix:

Note1: If you get errors on the page:
Error1:

PHP bcmath extension missing (PHP configuration parameter --enable-bcmath).
PHP mbstring extension missing (PHP configuration parameter --enable-mbstring).
PHP xmlwriter extension missing.
PHP xmlreader extension missing.

Run on the server:

$ sudo apt-get install php-bcmath
$ sudo apt-get install php-mbstring
$ sudo apt-get install php-xml

Error2:
Zabbix discoverer processes more than 75% busy
Solution: increase the number of discoverer processes in the server config (the StartDiscoverers parameter; the default is 1, so pick a value that suits your discovery load), then restart:

$ sudo vi /etc/zabbix/zabbix_server.conf
>StartDiscoverers=5
$ sudo service zabbix-server restart
$ sudo service apache2 restart

Error3:
Lack of free swap space on Zabbix server

sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab
sudo swapon -a 

C. Add agents to Centos/Ubuntu machines :
#Installing Zabbix agent on Ubuntu 16.04:

sudo wget http://repo.zabbix.com/zabbix/3.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.0-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.0-1+xenial_all.deb
sudo apt-get update
sudo apt-get install zabbix-agent
sudo service zabbix-agent start

Installing Zabbix agent on Centos 7.3:

sudo rpm -ivh http://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-release-3.0-1.el7.noarch.rpm
sudo yum update
sudo yum install zabbix-agent
sudo service zabbix-agent start

Error4:
Agent is not starting on Centos 7.3, Permission denied:

Investigations: 

# tail -3 /var/log/zabbix/zabbix_agentd.log
...
$ cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1
type=AVC msg=audit(1494325619.250:1410): avc:  denied  { setrlimit } for  pid=26242 comm="zabbix_agentd" scontext=system_u:system_r:zabbix_agent_t:s0 tcontext=system_u:system_r:zabbix_agent_t:s0 tclass=process

Solution :
Get the required policy and apply the output displayed:

 sudo cat /var/log/audit/audit.log | grep zabbix_agentd | grep denied | tail -1 | sudo audit2allow -M zabbix_agent_setrlimit
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i zabbix_agent_setrlimit.pp
# sudo semodule -i zabbix_agent_setrlimit.pp
# sudo systemctl daemon-reload
# sudo systemctl start zabbix-agent

JaxDevops 2017

I had the chance to attend JaxDevOps London; here is a valuable session from Daniel Bryant about common mistakes made with microservices…

  1.  7 (MORE) DEADLY SINS:
    1. Lust [Use the Unevaluated Latest and Greatest Tech]:
      1. Be an expert on Evaluation
      2. Spine Model: going up the spine solves the problems; don’t start with the first step (Tools) but with Practices, Principles, Values, Needs.
    2. Gluttony: Communication Lock-In
      1. Don’t rule out RPC [e.g. gRPC]
      2. Stick to the Principle of Least Surprise [JSON over HTTPS]
      3. Don’t let the API Gateway morph into an ESB
      4. Check out the cool tools: Mulesoft, Kong, Apigee, AWS API Gateway
    3. Greed: What Is Mine [within the Org]
      1. “We’ve decided to reform our teams around squads, chapters, and guilds”: be aware of cargo-culting
    4. Sloth: Getting Lazy with NFRs:
      1. Ilities: “Availability, Scalability, Auditability, Testability” can become an afterthought
      2. Security: Aaron Grattafiori DockerCon2016 Talk/InfoQ
      3. Thoughtworks: AppSec & Microservices
      4. Build Pipeline:
        1. Performance and load testing:
          1. Gatling/JMeter
          2. Flood.IO [upload Gatling script/scale]
        2. Security Testing:
          1. FindSecBugs / OWASP Dependency-Check
          2. BDD-Security (OWASP ZAP) / Arachni
          3. Gauntlt / Serverspec
          4. Docker Bench for Security / Clair
    5. Wrath: Blowing Up When Bad Things Happen
      1. Michael Nygard (Release It!): turn ops into a Simian Army
      2. Distributed Transactions:
        1. Don’t push transactional scope into Single Service
        2. Supervisor/Process Manager: Erlang OTP, Akka, EIP
      3. Focus on What Matters:
        1. CI/CD
        2. Mechanical Sympathy
        3. Logging
        4. Monitoring
      4. Consider:
        1. DEIS
        2. CloudFoundry
        3. OpenShift
    6. Envy: The Shared Single Domain and (Data Store) Fallacy
      1. Know your DDD:
        1. Entities
        2. Value Objects
        3. Aggregates and Roots
        4. Book:
          1. Implementing Domain-Driven Design
          2. Domain-Driven Design Distilled [high level]
            1. Context Mapping [Static] & Event Storming [Dynamic]
              1. infoq
              2. ziobrando
            2. Data Stores:
              1. RDBMS:
              2. Cassandra
              3. Graph -> Neo4J, Titan
              4. Support! Op Overhead
    7. Pride: Testing in the World
      1. Testing Strategies in a Microservice Architecture [Martin Fowler]
      2. Andrew Morgan [Virtual API Service Testing]
      3. Service Virtualisation:
        1. Classic Ones:
          1. CA Service Virtualization
          2. Parasoft Virtualize
          3. HPE Service Virtualization
          4. IBM Test Virtualization Server
        2. New kids:
          1. [SpectoLabs] Hoverfly: Lightweight
            1. Fault Injection
            2. Chaos Monkey
          2. Wiremock
          3. VCR/BetaMax
          4. Mountebank
          5. Mirage

Getting latest workspace…

Getting the latest code for all workspaces can be time-consuming; forgetting to do so can cause bigger issues…

So, here is the remedy:

There is a hard-coded “D” drive in the script, used to change drive and navigate to the source code folders. If your code is on the C drive you can just remove that part…

************************************************

get latest

************************************************
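
A minimal sketch of such a script, assuming tf.exe is on the PATH (e.g. from a Visual Studio developer command prompt) and that each mapped workspace folder lives under D:\Source; adjust the paths to your own layout:

# Run 'tf get' for every mapped workspace folder under D:\Source
Set-Location D:\Source
Get-ChildItem -Directory | ForEach-Object {
    Write-Output "Getting latest for $($_.FullName)"
    Push-Location $_.FullName
    tf get /recursive /noprompt
    Pop-Location
}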

I think it would be handier with more error handling and a summary report at the end, but as a quick solution, it does the job…

Left somewhere up and high

If the accountable and responsible people are not defined, you may find people complaining about

– the number of the emails they receive

– repeating same mistakes

– no central location for documents

simply because you are not learning from your mistakes.

Accountable people normally pay the bill for a mistake, and make sure they are not going to pay it twice. If a process is not in place, people get confused; but if they are really nice people, they will try to keep the system going without looking for an accountable person.

Responsible people make sure that next time they have a smart way of doing it, at least by not repeating the same mistakes…

No accountable? No bill?
No responsible? No assigned people?

Some people enjoy this, as it is an opportunity to learn a “different task” each time. The environment is so slippery that they can be “x manager” this week and “y manager” next week. Do they know what an “x manager” should do/know/implement? No…
Is their task finished? Yes, not in the best way, but they managed to get it working [with silly hours of work].

So, because it is working, some people enjoy it; the people who suffer play the nice guy… because they are all nice…
If they want to object/comment/suggest, they are left somewhere up and high…

Business Model Canvas

 

I have ideas. Many of them, actually, every day. Especially when going to sleep and just before waking up I have loads of them, so I write them down in my RTM. Every month or so I clean up this list of ideas and edit, delete, and merge them according to my feelings. Some ideas make it through several of these inconsequential selections. Those ideas are the ones I’d like to develop further – inventing a business model for them to make it work. So I start writing up an executive summary of maximum 2 pages according to Guy Kawasaki’s blog post and there you go; another idea that needs a business model, team, thinking, investment, etcetera.

At this point I somehow can’t seem to take things forward; shipping it. So far, I have read many management books, “VC recommendations”, and blogs about how to make your business model sustainable, to somehow “fit” into the market you want to conquer. However, all these books don’t do it for me – they all provide too much text (is a 220 page guide still helpful?) and rules of “what to do” (based on the past) and not “how to do it” (better = sustainable). I was missing a strong framework that forces me to make sense of all my loose thoughts, while focusing on the business model itself and the future, learning from past success formulas and proven strategies (we all learn from the past), but without holding on to them too much.

Since last week, I’ve been reading up on Business Model Generation by Alexander Osterwalder (72 page preview here). This book is awesome! I was introduced to Alex by my friend Anne McCrossan about a year ago in regards to Somesso, but I didn’t get the chance to read this book until last week. Alex is a Swiss entrepreneur who teaches systematic approaches to business model innovation. The book is innovative on its own as it’s co-created by 470 other experts (not just by anyone – participants had to pay to join the dialogue). How’s that for innovation?!

This book is really easy to digest and fits well into my “low information diet”, which I wrote about earlier. In short, well-organised sections it provides an overview of the learnings from proven strategies and concepts like “blue oceans” (W. Chan Kim and Renée Mauborgne), “the long tail” and “FREE” (Chris Anderson), multi-sided platforms and open business models. Also the business model canvas is introduced (see below), which is indeed very handy. Thanks Alex and the 470 others who helped publish this great guide!

Post is by Arjen, click here if you want to read the post from the original url.

Stakeholder Engagement

Stakeholder engagement is an important process to carry through any project. Involvement and engagement can add value and increase the life of the projects that go live.

Prince2 [Project Management Framework] suggests a good framework for the process:

1. Identification : Know your target people. Who is going to be affected by this project?
2. Analysis of Profiles: This creates an inclusive environment where stakeholders’ points of view, influence, conflicts, interests and trade-offs can be elicited. In the most basic form, we can divide this group into 4:
a. Support or oppose the project
b. Gain or lose as a result of the project delivery
c. See the project as a threat or enhancement to their position
d. Become active supporters or blockers of the project/its progress.
3. Defining strategy: The communication strategy will be defined:
a. For each profile, the method, format and frequency of the communication
b. The message sender and recipient are decided
c. What information will be communicated?
4. Planning strategy: With the correct communicator, the negotiations’ timing and method will be planned.
5. Engaging stakeholders (Negotiations and Partnership): Carry out the plan.
6. Checking effectiveness (Monitoring): What are the results?