Running A Load Testing Go Utility Using Docker For Mac

Estimated Reading Time: 7 minutes

Do you want to learn Docker for free? Yes, you read that correctly. Thanks to a playground called "Play with Docker" – PWD in short.

PWD is a website which allows you to create 5 instances to play around with Docker and Docker Swarm Mode for 4 hours – all at zero cost. It is a perfect tool for demos, meetups, and beginner- as well as advanced-level training.

I tested it on a number of web browsers – Chrome, Safari, Firefox and IE – and it just works flawlessly. You don't need to install Docker, as it comes pre-installed.

It's a ready-to-use platform. Currently, PWD is hosted on two AWS r3.4xlarge instances, each with 16 cores and 120 GB of RAM.

It comes with the latest Docker 17.05 release, docker-compose 1.11.1 and docker-machine 0.9.0-rc1. You can set up your own PWD environment in your lab using its repository.

Credits to Docker Captains Marcos Nils and Jonathan Leibuisky for building this amazing tool for the Docker community. One of the most interesting facts about PWD is that it is based on the DinD (Docker-in-Docker) concept: when you are playing around with PWD instances and building application stacks, you are actually inside a Docker container yourself. Interesting, isn't it? PWD gives you the amazing experience of having a free Alpine Linux 3.5 virtual machine in the cloud where you can build and run Docker containers and even create a multi-node Swarm Mode cluster.

That said, PWD is NOT just a platform for beginners. Today it has matured enough to run sophisticated application stacks on top of it. Within seconds, you can set up a Swarm Mode cluster running an application stack. Please remember that PWD is meant for trying out new stuff with Docker and its applications, NOT for production use. The instances vanish automatically after 4 hours.

Estimated Reading Time: 4 minutes

Docker, Inc. announced initial support for volume driver plugins for the first time in the Docker 1.8 release. Since then, there have been subtle changes to the volume plugin architecture. With the new Docker 17.03 volume plugin architecture, writing your own volume plugin is considerably simplified.

With Docker 17.03, the legacy Docker Volume Plugin Specification (1.12) has been revamped. The new specification extends the standardization and covers packaging a plugin as a Docker image.

What this really means is that you can now convert your extension/plugin into a Docker image which you can publish on Docker Hub. Interesting, isn't it? Put simply, it is now possible to publish a volume plugin as a Docker image which anyone can discover, install flawlessly onto their system, and easily configure and manage. The new Docker volume plugins enable Engine deployments to be integrated with external storage systems such as Amazon EBS, and enable data volumes to persist beyond the lifetime of a single Docker host. Before we build, store, install and manage the plugin, we need to dig deeper into the newer Docker Volume API design.

Understanding the Docker Volume API design: as per the official Docker volume plugin page, "The new Plugin API is RPC-style JSON over HTTP, much like webhooks. Requests flow from the Docker daemon to the plugin, so the plugin needs to implement an HTTP server and bind this to the UNIX socket mentioned in the 'plugin discovery' section. All requests are HTTP POST requests. The API is versioned via an Accept header, which currently is always set to application/vnd.docker.plugins.v1+json." How does Docker volume orchestration work?

Playing around with the RexRay volume plugin: in an earlier blog post, I talked about RexRay as a volume plugin. Let us look at the various CLI operations you can perform with this plugin: listing the RexRay volume plugin, disabling or enabling it, and verifying that "Enabled=true" is listed once the plugin is enabled again. If a volume plugin ships as a Docker image, there should be a way to enter this container.

Yes, it is possible. You can enter a shell of the RexRay volume plugin using the docker-runc command, and check the plugin logs from there.

It's time to use this plugin, create a volume for your application, and inspect the volume. I hope you found this blog useful. In a future blog post, I will talk further about how volume plugin orchestration works in a Swarm Mode cluster.

Estimated Reading Time: 4 minutes

The Go programming language has really helped shape Docker into powerful software and enabled fast development for distributed systems. It has been helping developers and operations teams quickly construct programs and tools for cloud computing environments. Go offers built-in support for JSON (JavaScript Object Notation) encoding and decoding, including to and from built-in and custom data types. In the last year, Docker Swarm Mode has matured enough to become production-ready. The orchestration platform is quite stable, with numerous features like logging, secrets management, security scanning, improvements to scheduling, networking, etc.

This makes it simpler to use and to scale out with just a few one-liner commands. With the introduction of new APIs like swarm, node, volume plugins, services, etc., Swarm Mode brings dozens of features to control every aspect of the swarm cluster from the master node. But when you start building services in the range of hundreds and thousands, distributed across hundreds and thousands of nodes, there arises a need for a quick and handy way of filtering the output, especially when you want to capture one specific piece of data out of the whole cluster-wide configuration. Here the filtering flag comes to the rescue. The filtering flag (-f or --filter) takes a key=value pair and is a very powerful tool for developers and system administrators. If you have ever played around with the Docker CLI, you must have used the docker inspect command to get metadata on a container or image.

Using it with -f gives you more specific information, like the IP address, network, etc. There are numerous guides on how to use filters on a standalone host holding Docker images, but I found a lack of guides covering Swarm Mode filters. Under this blog post, I have prepared a quick consolidated list of filtering commands and their outputs in tabular format for a Swarm Mode cluster (shown below). I have a 3-node Swarm Mode cluster running on Google Cloud Engine.

Estimated Reading Time: 6 minutes

Apache JMeter is popular open source software used as a load testing tool for analyzing and measuring the performance of web applications or a multitude of services. It is a 100% pure Java application, designed to test performance of both static and dynamic resources and dynamic web applications.

It simulates a heavy load on one or many servers, networks or objects to test their strength or to analyze overall performance under different load types. The applications, servers and protocols covered include HTTP, HTTPS (Java, NodeJS, PHP, ASP.NET), SOAP/REST web services, FTP, databases via JDBC, LDAP, message-oriented middleware (MOM) via JMS, mail (SMTP(S), POP3(S) and IMAP(S)), native commands or shell scripts, TCP, Java objects and many more.

JMeter is extremely easy to use and doesn't take long to get familiar with. It allows concurrent and simultaneous sampling of different functions by separate thread groups. JMeter is highly extensible, which means you can write your own tests, and its visualization plugins let you extend your testing further. JMeter supports many testing strategies such as load testing, distributed testing and functional testing, and it can simulate multiple users with concurrent threads to create a heavy load against the web application under test. The target (system under test) is the web server which undergoes the stress test. What does Docker have to offer for Apache JMeter?

Good question! With every new installation of Apache JMeter, you need to download and install JMeter on every new node participating in distributed load testing. Installing JMeter requires its dependencies to be installed first: Java (default-jre-headless), iputils, etc. The complexity begins when you have a multitude of OS distributions running on your infrastructure. You can always use automation tools or scripts, but that adds the extra cost of maintenance and the skills required to troubleshoot from the logs when something goes wrong in the middle of testing. With Docker, it is just a matter of one Docker Compose file and a one-liner command to get the entire infrastructure ready. Under this blog, I am going to show how a single Docker Compose v3.1 file can help you set up an entire JMeter distributed load testing environment – all running on a Docker 17.03 Swarm Mode cluster.

I will leverage a 4-node Docker 17.03 Swarm cluster running on my Google Cloud Engine platform to showcase this solution. Under this setup, I will use the instance "master101" as the master/client node and the remaining worker nodes as server/slave nodes. All of these instances run Ubuntu 17.04 with Docker 17.03 installed. You are free to choose any recent Linux distribution which supports the Docker executables.

First, let us ensure that the latest Docker 17.03 is running on your machine:

$ curl -sSL https://get.docker.com/ | sh

Next, ensure that the latest Docker Compose, which supports the v3.x file format, is installed:

$ curl -L https://github.com/docker/compose/releases/download/<version>/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

Docker Compose for Apache JMeter distributed load testing: one can use the docker stack deploy command to quickly set up a JMeter distributed testing environment. This command requires a docker-compose.yml file which holds the declaration of the two services (apache-jmeter-master and apache-jmeter-server). Let us clone the repository to get started. Run the commands below on the Swarm master node:

$ git clone <jmeter-docker repository URL>
$ cd jmeter-docker

Under this directory you will find the docker-compose.yml file, which contains two service definitions – master and server. Through this compose file format, we can push the master/client service definition to the master node and the server-specific service definition to all the slave nodes.
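The compose file itself did not survive into this post, so as a reference, here is a minimal sketch of what such a file could look like. The service and network names come from the surrounding text, while the image names, deploy modes and the exact constraint expressions are assumptions:

```yaml
version: "3.1"

services:
  master:
    image: jmeter-master          # assumed image name
    networks:
      - jm-network
    deploy:
      placement:
        constraints:
          - node.role == manager  # pin the JMeter client to the master node

  server:
    image: jmeter-server          # assumed image name
    networks:
      - jm-network
    deploy:
      mode: global                # one JMeter slave per remaining node
      placement:
        constraints:
          - node.role == worker

networks:
  jm-network:
    driver: overlay
```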

The constraints section takes care of this functionality. The compose file also automatically creates an overlay network called jm-network across the cluster. Let us start the required services from the compose file:

$ sudo docker stack deploy -c docker-compose.yml myjmeter

Check whether the services are up and running with the docker service ls command, then verify whether the constraints worked correctly. In my run, the constraints worked well, pushing the containers to the right nodes. It's time to enter one of the containers running on the master node. Using the docker exec command, one can enter the container and browse to the /jmeter/apache-jmeter-3.1/bin directory. Push the JMX file into the container and start the test run:

$ docker exec -i <container-id> sh -c 'cat > /jmeter/apache-jmeter-3.1/bin/jmeter-docker.jmx' < jmeter-docker.jmx
root@daf39e596b93:/# cd /jmeter/apache-jmeter-3.1/bin
root@daf39e596b93:/jmeter/apache-jmeter-3.1/bin# ./jmeter -n -t jmeter-docker.jmx -R <slave node IPs>

Estimated Reading Time: 4 minutes

Are you still thinking about whether or not to attend DockerCon 2017? Still finding it difficult to convince yourself or your boss/manager to let you attend this conference?

Then trust me, you have come to the right place. Over the next 30 minutes, I will talk about the great sessions which you can't afford to miss this year.

DockerCon 2017 is just one month away, heavily power-packed with 3 keynotes (including Solomon Hykes' impressive talk), 7 tracks, 60+ breakout sessions, workshops, Ask the Experts, Birds-of-a-Feather sessions, a Hands-on Lab, an ecosystem expo and lots more. This year, DockerCon 2017 brings an impressive three-day event schedule to the capital of the U.S.


state of Texas – Austin. Featuring topics, content and workshops covering all aspects of Docker and its ecosystem, DockerCon has always offered the chance to meet and talk to like-minded professionals, get familiar with the latest offerings, upcoming Docker releases and roadmap, and best practices and solutions for building Docker-based applications. Equally, it has always given community users the opportunity to share what they use Docker for, both on premises and in the cloud. DockerCon 2017 takes place this April in Austin, TX and is primarily targeted at developers, DevOps engineers, ops, system administrators, product managers and IT executives.

Whether you are an Enablement Solution Architect for DevOps and containers or a Technical Solution Architect; whether you are part of an IoT development team or an AWS/Azure DevOps engineer; whether you are a Principal Product Engineer or a Product Marketing Manager, DockerCon is the place to be. Still wondering how this conference would help your organization adopt containers and improve your containerized application offerings for your customers? I have categorized the list of topics based on the target audience; I hope it helps you gather data points to convince yourself and your boss. As a developer, you are a core part of your organization, busy developing new versions of your flagship software meant to run on various platforms. You are responsible for development that leverages the target containerized platform's capabilities, and for adapting and maintaining release artifacts to deliver a compelling experience for your users. The sessions listed below might help you develop better containerized software. As a product manager, you are effectively the CEO of your product, responsible for the strategy, roadmap and feature definition of that product or product line.

You love to focus on the problems, not the solutions, and you excel at getting prospects and customers to express their true needs; the list of sessions below might interest you. As a system administrator, you are the person responsible for the uptime, performance, resources, security, configuration and reliable operation of systems running Docker applications; the sessions below might help you manage your Dockerized environment better. As a solution architect, you are always busy with the definition and implementation of reference architectures, capturing business capabilities and transforming them into services leveraged across the platform and, not to be missed, designing infrastructure for critical applications and business processes in a cost-effective manner; the list below might help you shape your containerized solutions. Don't you think attending DockerCon is going to be a great investment in you and your career? If yes, then what are you waiting for?

The Docker team has something really cool to get you started.

What is a Docker secret? It is a blob of data, such as a password, SSH private key, certificate, API key or encryption key. In broader terms, it can be anything to which access must be tightly controlled. The secrets-management capability is the latest security enhancement integrated into the Docker platform, ensuring applications are safer in a containerized environment. This is going to benefit financial-sector players who are looking for a hybrid cloud security strategy. Why do we need Docker secrets? There have been numerous concerns over environment variables being used to pass configuration and settings to containers. Environment variables are easily leaked when debugging and get exposed in many places, including child processes and the servers hosting them.

Consider a Docker Compose file for a WordPress application:

wordpress:
  image: wordpressapp
  links:
    - mariadb:mysql
  environment:
    - WORDPRESS_DB_PASSWORD=
  ports:
    - "80:80"
  volumes:
    - ./code:/code
    - ./html:/var/www/html

As shown above, environment variables are insecure by nature: they are accessible by any process in the container, preserved in intermediate layers of an image, easily viewed through docker inspect and, lastly, shared with any container linked to the container. To overcome this, one can use secrets to manage any sensitive data which a container needs at runtime, with no need to store it in the image. A given secret is only accessible to those services which have been granted explicit access to it, and only while those service tasks are running. How does it actually work? Docker secrets is currently supported for Swarm mode only, starting with Docker Engine 1.13.1.

If you are using Docker 1.12.x, you will need to upgrade to the latest 1.13.x release to use this feature. To understand how secrets work under Docker Swarm mode, you can follow the process flow below. Docker Compose file format v3.1 is available and requires Docker Engine 1.13.0+. It introduces support for secrets for the first time, which means that you can now use secrets inside your docker-compose file.
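As a sketch of what that looks like, here is a minimal v3.1 compose fragment using the top-level secrets key. The secret and service names are invented for illustration, and an external secret must first be created with docker secret create:

```yaml
version: "3.1"

services:
  wordpress:
    image: wordpress
    secrets:
      - wp_db_password      # mounted in the container at /run/secrets/wp_db_password

secrets:
  wp_db_password:
    external: true          # created beforehand, e.g.: echo "s3cret" | docker secret create wp_db_password -
```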

Estimated Reading Time: 6 minutes

Recently I purchased a brand-new slim 13.3-inch Apple MacBook Air with an amazing 1.6GHz dual-core Intel Core i5 processor. Introducing Siri to the newly re-branded macOS for the first time along with dozens of new features, it came running macOS 10.12.1 Sierra by default. macOS Sierra is the 13th major release of macOS (previously OS X) and the successor of OS X El Capitan. ICYMI – Apple released the macOS 10.12 Sierra open source Darwin code last November.

One of the first things I wanted to try was to see how easy it is to bring Docker 1.13.0 up and running on this system. Trust me, it was an amazing experience. Under this blog post, I am going to share my experience with Docker 1.13.0 and what you really need to know about Docker for Mac on macOS Sierra.

In case you are new to Docker for Mac: last year, around March, Docker announced and released native beta support for Mac and Windows, aptly termed "Docker for Mac" and "Docker for Windows" respectively. They started with a closed beta and provided access to a handful of early adopters. During DockerCon 2016, they announced the final GA release for both platforms.

Docker for Mac vs. Docker Toolbox – prerequisites: Docker for Mac only works on OS X 10.11 or newer macOS releases. Your Apple Mac must be a 2010 or newer model, with Intel hardware support for MMU virtualization. Docker for Mac requires at least 4 GB of RAM to function smoothly.

Currently the installer comes through two channels: stable and beta. Under the stable channel, the installer is fully tested and comes with the latest GA version of Docker Engine, with experimental features enabled by default. Under the beta channel, the installer provides the latest beta release of Docker for Mac, also with experimental features enabled by default. Feature enablement under Docker 1.13.0:

Getting started with Docker Engine 1.13.0 on macOS Sierra: installing Docker for Mac is one of the most pleasant experiences I have ever had installing any software.

Just 3 simple steps: 1. Download Docker for Mac by clicking the link and double-click Docker.dmg, which opens up the installer. 2. Drag Moby the whale to the Applications folder. 3.

Authorize Docker.app with your system password and double-click Docker.app to get started. Now, this is really amazing: Docker for Mac comes with docker-compose, docker-machine and experimental features available by default.

1. Verify the docker-compose version:

bash-3.2# docker-compose version
docker-compose version 1.10.0, build 4bd6f1a
docker-py version: 2.0.1
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j 26 Sep 2016

2. Verify the docker-machine version:

bash-3.2# docker-machine version
docker-machine version 0.9.0, build 15fd4c7

Kudos to the Docker engineers – the whale in the status bar is all you need to get a glimpse of the running Docker daemon and an easy way to configure Docker preferences and environment to your needs. Open the terminal and you can see that Docker for Mac runs on top of Alpine Linux v3.5, with overlay2 as the default storage driver, plugin support and much more. Multi-CPU architecture support (ARM, AArch64 and ppc64le): Apple Inc.

added support for an ARM chip to the latest macOS Sierra 10.12 kernel. At the right time, Docker for Mac made binfmt_misc multi-architecture support available, which means that you can now run containers for different Linux architectures, such as arm, aarch64, ppc64le and even s390x. I couldn't wait to test-drive a few Raspberry Pi ARM-based Docker images on my MacBook Air.

Estimated Reading Time: 5 minutes

The Raspberry Pi 3 Model B is the first 64-bit version and the third-generation Pi, running a 1.2GHz 64-bit quad-core ARMv8 CPU (Broadcom BCM2837 with A53 ARM cores). Despite its processor upgrade, there wasn't an official 64-bit OS available for it until the first week of January 2017. Kudos to the SUSE team, who delivered the first commercial enterprise Linux distribution optimized for ARM AArch64 servers.


This is definitely BIG news. The reason: it lets you build solutions that meet specific market needs while maintaining a common code base. Enterprise vendors and customers demanding workload-optimized server platforms can now radically expand their modern data centers. In the last couple of months, Docker enthusiasts have been working hard to get Docker running on 32-bit ARM systems (like the Raspberry Pi). With Docker Engine 1.12.1, the first ARM Debian package was officially made available by Docker, Inc. late last year.


This year, the SUSE team did a great job bringing the capabilities of SUSE Linux (a.k.a. SLES for ARM) to the ARM AArch64 hardware platform. This is big news for the Docker community too, as more innovation and development is expected around building containers that run across the AArch64 platform. Under this blog, I am going to test-drive Docker 1.12.3 on the first 64-bit ARM openSUSE distribution running on a Raspberry Pi 3 box.