Docker & Microservices at Scale


The original post can be found on the Electric Cloud blog.

In a recent Continuous Discussions (#c9d9) video podcast, expert panelists discussed Docker and microservices at scale.

Our expert panel included: Andreas Grabner, Technology Strategist at Dynatrace; Chris Haddad, Chief Architect at Karux LLC; Chris Riley, Analyst at Fixate.io; Esko Luontola, Programmer & Interaction Designer; Phil Dougherty, CEO at ContainerShip; and our very own Anders Wallgren and Sam Fell.

During the episode, the panelists discussed the benefits of microservices and containers, some of their challenges, and best practices for building, deploying and operating microservices on a large-scale Docker-ized infrastructure. Continue reading for their insights!

Docker and Microservices – Why?

 

Benefit: containers enable you to point at a runtime executable and trace it back to specific git tags @cobiacomm

If you don't do it architecturally, containers can be a cost-effective way to scale these types of deployments @anders_wallgren

Benefits: maintenance time down, less time for … which means more time to innovate - Andreas Grabner @Dynatrace

Dougherty talked about microservices and Docker compatibility, “When you’re breaking down a monolith into small, composable services, Docker is going to come in handy. Why? Because you might have many different teams that are all working on individual services. Being able to package them up and share them amongst the group, the development team, compose them together, and be able to work on them easily is extremely important. If you’re going to do these constant deployments of smaller services instead of, every six months pushing out this giant monolith, you want to have a way to do that easily and have immutable artifacts that you can easily push out and deploy. So, they really kind of go hand-in-hand to, you know, making microservices easy to push out.”
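
A minimal sketch of that "immutable artifact" idea using the Docker CLI follows; the service name, version, and registry address are placeholders rather than anything from the panel:

    # Build the service image once and tag it with an immutable version.
    docker build -t registry.example.com/orders-service:1.4.2 .

    # Push it so every team and every environment pulls the exact same artifact.
    docker push registry.example.com/orders-service:1.4.2

    # Deploy that same tag everywhere; never rebuild per environment.
    docker run -d --name orders registry.example.com/orders-service:1.4.2

Because the tag is never rebuilt or overwritten, what was tested is byte-for-byte what gets deployed.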

Fell added to Dougherty’s comment, “The immutability factor, I think that’s one that a lot of people would agree with. It’s important if you want that parity or that fidelity across the pipeline. The ‘It worked in my environment’ sort of argument largely goes away.”

Grabner chimed in on microservices and Docker, “Talking about pipeline speeds, we use Docker heavily for testing, for speeding up our pipeline — by executing tests in parallel, by being able to test individual services in isolation, very often with every check-in. I think this is just great, shifting left, finding problems early on. And I think Docker is a great enabler. If you build a microservice architecture, yes or no, that obviously then brings another benefit. But I believe Docker itself is already very beneficial in that case.”
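
A rough sketch of that kind of per-check-in test stage, assuming hypothetical per-service CI images that each contain a run-tests.sh entry point (GIT_COMMIT would come from the CI system):

    # Run each service's tests in its own throwaway container, in parallel.
    pids=""
    for svc in orders payments inventory; do
      docker run --rm "registry.example.com/${svc}-service:ci-${GIT_COMMIT}" ./run-tests.sh &
      pids="$pids $!"
    done

    # Fail the build if any suite failed.
    for pid in $pids; do
      wait "$pid" || exit 1
    done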

When it comes to synergy between Docker and microservices, Haddad adds, “If you’re really going to transform your digital business, you need to actually take a top-down approach. Define out your domains, and then containers enable you to point at a runtime executable, and even better yet, trace it back to specific git tags to say that I am rapidly incorporating new features into this domain object that’s placed in a container. That’s where the intersection is. You can conflate the two or keep them separate. Containers don’t equal microservices, but there’s ways to piece it together, where there’s a natural peanut butter and jelly or synergy between the two.”
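
One plausible way to get that container-to-git traceability is to stamp images with git metadata at build time. The label keys below follow the OCI image spec; the service and registry names are made up:

    # Record the git tag and commit on the image itself.
    GIT_TAG=$(git describe --tags --always)
    GIT_SHA=$(git rev-parse HEAD)
    docker build \
      --label "org.opencontainers.image.version=${GIT_TAG}" \
      --label "org.opencontainers.image.revision=${GIT_SHA}" \
      -t "registry.example.com/orders-service:${GIT_TAG}" .

    # Later, point at a running container and recover the source revision.
    docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.revision" }}' orders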

Riley added insight into the cultural aspect, “It also comes down to culture, teams working together, and not having to worry about everybody being exactly in sync. Most Agile environments are just a really fast waterfall. Maybe one out of 10 Agile environments I’ve seen is actually Agile. So, as we move up the chain, you know, you can’t be predicated on everybody getting their work in on time. So, the microservices especially, allow you to do some of the cool stuff that we’ve all talked about but haven’t yet fully actualized.”

Luontola focused on the impact on the development side, “If your system has a database or some other external data dependency, with just one command you have the dev environment running. It’s like, okay, here are some other teams that produce applications, and they run on Docker. But even in these simple projects, where you just have a database and stuff, even there, Docker is a benefit. And I also have some open source projects that I’m maintaining, and in one of them, it’s called Retrolambda, I run the tests against Java 5, 6, 7, 8, and 9. So, it’s nice when I can just have the one container that has all of the environment set up.”
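
As a sketch of both points, assuming a Postgres-backed service and a project that ships a Maven wrapper (and using a smaller JDK matrix than the Java 5-9 range Luontola mentions):

    # One command for a disposable development database.
    docker run -d --name dev-db -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:9.6

    # Run the same test suite against several JDK versions, each in its own container.
    for jdk in openjdk:7 openjdk:8 openjdk:9; do
      docker run --rm -v "$PWD":/project -w /project "$jdk" ./mvnw test
    done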

Highlighting the cost benefits of containers, Wallgren adds, “It’s almost a necessary response, in some ways, to get the overhead of running a service down on a given piece of hardware. Once you get down to it, there’s still a CPU at the end of that somewhere. No matter how serverless you are, you’re not CPU-less. I’m hanging on to that. If you don’t do it architecturally, if you start doing microservices, then at some point your CFO or CMO is going to walk into your office and say, ‘Hey, how come we have all these really expensive VMs? Can we use something cheaper?’ Containers, you know, really are the only way to go a lot cheaper in the deployment footprint for that. From the cost or resource utilization perspective, that’s a pretty big one, too. At scale, obviously.”
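
In practice the density argument comes down to resource limits: many small, capped containers sharing one host instead of a VM per service. A minimal illustration, with arbitrary limits and a placeholder image:

    # Cap CPU and memory so many services can be packed onto a single host.
    docker run -d --name orders --cpus 0.5 --memory 256m \
      registry.example.com/orders-service:1.4.2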


Challenges – Testing, Security, Monitoring

When you have a proliferation of containers, you have to be ready to apply testing, security, and monitoring at scale @cobiacomm

Don't start with microservices, start with the monolith and one code base and then split things up from there @orfjackal

Grabner gives a personal account of monitoring, “I think monitoring has been seeing a big change for us, obviously. Not only the way we monitor Docker containers, how we get in there, but also what we do with the data and how we understand dependencies between the containers. Not only from the physical perspective, where they live and how they’re needing each other’s resources, but also the services that live in there, how they communicate end-to-end and how they, in the end, impact what is most important — the end user who is using the service base. That’s the problem that we try to solve.”

Haddad also taps into personal experience, “When you have a proliferation of containers, you have to be ready to apply testing, security, and monitoring at scale. It’s really easy to point to one server and one location, or one VM and one location and say, ‘I want to monitor that one thing, and I’m going to reach out and pull that machine for all the information, all the log files.’ One client, their mind just blew up. They thought it was the zombie apocalypse because their traditional monitoring tool was based on a reach-out-and-pull mentality. And they couldn’t get their heads around that, well, you don’t know where these containers are. You don’t know how many there are, so you need to push out the logs to a central aggregator service.”
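
Docker’s logging drivers support exactly that push model. A minimal sketch, with a placeholder Fluentd aggregator address and a placeholder image:

    # Ship container logs to a central aggregator instead of pulling them host by host.
    docker run -d --name orders \
      --log-driver=fluentd \
      --log-opt fluentd-address=logs.example.com:24224 \
      --log-opt tag="orders.{{.ID}}" \
      registry.example.com/orders-service:1.4.2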

Riley added to the monitoring conversation, “When you start deploying services across your application and you deploy the same service in different regions, even if you only have a handful of services per region, the problem is, it gets big, but it’s not a new problem. This is not new at all. I mean, it becomes a big deal because of scale and volume, but we have the tools to solve this. And I think one of the things that some organizations are falling into the trap of is expecting the tools that they acquire, their monitoring tools, just to suddenly deliver magic, and I think that’s part of the risk of using the term AI…You don’t build dashboards just to build dashboards. You build dashboards to consume them in some way. And in some respects, this is an information architecture problem that starts at your private repo. I think organizations that complain about monitoring are all the same organizations who have snowflake configurations and snowflake images, and don’t treat their containers as immutable. Visibility starts very early on. So that’s my point. I think this is a solvable problem. It’s a big deal. I know of three brand new vendors that are doing container-native, microservices-native monitoring. So you know the market’s out there.”

Luontola offers advice on testing, “One of the challenges about microservices is that the general advice from at least one or two years ago was don’t start with microservices. So, start with a monolith, and then, it’s so much faster and easier to develop when you have, let’s say, a maximum of 10 people working on the same code base. Then as things start to speed up, you will need to start adding all this monitoring and network stuff, and all the retries. Advice I’ve heard is to test how resilient your system is to all these failures and so on, make it so that it randomly duplicates messages or drops messages, and your system should survive that with no problems.”
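
One way to approximate that kind of failure injection at the network level is Linux netem inside the container. This assumes the image ships iproute2 (the tc tool); names and percentages are purely illustrative:

    # Start the service with permission to alter its own network settings.
    docker run -d --name orders --cap-add NET_ADMIN registry.example.com/orders-service:1.4.2

    # Randomly delay, drop, and duplicate packets on its interface.
    docker exec orders tc qdisc add dev eth0 root netem delay 100ms loss 5% duplicate 5%

    # ...run the test suite against the degraded service, then clean up...
    docker exec orders tc qdisc del dev eth0 root netem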

Dougherty adds insight into the importance of security, “The other thing is security when it comes to layers of your containers being… Luckily, we have a lot of startups that have come out, and Docker itself is doing a lot of work around this, when it comes to trusted registry stuff and being able to ensure that we have consistency in the layers of our containers. Because people are going out and they’re picking a base image, and they’re building all their stuff off of it, but something can happen upstream from them that poisons everything they’ve done. So, we’re getting a lot of benefits when it comes to deployments and scalability and putting more power in the hands of developers.”
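
Two common mitigations for that upstream-poisoning risk are pinning base images by digest and only accepting signed images. A brief sketch, assuming a registry with content trust (Notary) enabled; the image names are placeholders:

    # In the Dockerfile, pin the base image by digest rather than a mutable tag,
    # e.g.  FROM ubuntu@sha256:<digest>  instead of  FROM ubuntu:latest

    # Refuse unsigned images when pulling, building, and pushing.
    export DOCKER_CONTENT_TRUST=1
    docker pull registry.example.com/base/app-runtime:16.04
    docker build -t registry.example.com/orders-service:1.4.3 .
    docker push registry.example.com/orders-service:1.4.3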

Referencing the shift left, Wallgren adds, “We have to sort of shift left also, not just as we were talking about earlier, the kind of monitoring that we do, but deployments. If the first time you’re exercising your deployment functionality is when you go into production, that’s not as prevalent with containers, because good luck doing a deployment of containers manually, right? I mean, most of us are using some form of automation. But what is the fidelity of that process vis-à-vis the fidelity when you go into production? I mean, is it different? When something breaks while you’re testing, do you deal with it the same way when it breaks in production? If you don’t, why not? And I’m sure there are legitimate reasons to treat them differently, but, you know, you should know them and understand them.”


Best Practices for Docker and Microservices at Scale

Your first day should not be, what are our first 50 microservices going to be? Focus on product first @anders_wallgren

 

.@cobiacomm best practice: Give your organization the time and experts to train your team

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products.