Deploying deep learning models with Docker and Kubernetes

<ul><li><p>Deploying deep learning models: a platform-agnostic approach for production with Docker + Kubernetes</p></li><li><p>General Architecture</p></li><li><p>DOCKER</p><p>e.g. an ASUS ESC8000 G3 local server, with no vendor lock-in</p><p>Cloud service</p><p>For inference, i.e. processing customer queries via API</p><p>EXACTLY THE SAME MODEL, both locally at the office and in the cloud</p></li><li><p>DOCKER for deep learning?</p><p>Unfortunately, that is wrong for deep learning applications. For any serious deep learning application, you need NVIDIA graphics cards; otherwise it could take months to train your models. NVIDIA requires the host driver and the docker image's driver to be exactly the same. If the version is off even by a minor number, you will not be able to use the NVIDIA card: it will refuse to run. I don't know how much of the binary code changes between minor versions, but I would rather have the card try to run instructions and get a segmentation fault than die because of a version mismatch.</p><p>We build our docker images based off the NVIDIA card and driver, along with the software needed; we essentially maintain the same docker image once per driver version. To help manage this, we have a test platform that makes sure all of our code runs on all the different docker images.</p><p>This issue is mostly in NVIDIA's court: they could modify their drivers to work across different versions. I'm not sure if there is anything Docker can do on their side, but I think it's something they should figure out. The combination of docker and deep learning could help a lot more people get started faster, but right now it's an empty promise.</p><p>The biggest impact on data science right now is not coming from a new algorithm or statistical method. It's coming from Docker containers.
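</p><p>The per-driver image scheme described above can be sketched as a tag lookup. This is a minimal sketch, not our actual tooling: the "dl-base:driver-NNN.NN" tag convention and the version strings are hypothetical.</p>

```python
import re

def pick_image_tag(host_driver_version, available_tags):
    """Select the docker image whose baked-in NVIDIA driver exactly
    matches the host driver; any mismatch, even a minor one, refuses to run."""
    for tag in available_tags:
        # Tags are assumed to follow a "dl-base:driver-367.57" convention.
        match = re.search(r"driver-([\d.]+)$", tag)
        if match and match.group(1) == host_driver_version:
            return tag
    raise RuntimeError("no image built for host driver " + host_driver_version)

# Only the exact driver match is usable.
tag = pick_image_tag("367.57", ["dl-base:driver-352.93", "dl-base:driver-367.57"])
```

<p>A test platform like the one described would then run the whole code base against every image this lookup can return.</p><p>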
Containers solve a bunch of tough problems simultaneously: they make it easy to use libraries with complicated setups; they make your output reproducible; they make it easier to share your work; and they can take the pain out of the Python data science stack.</p><p>The wonderful triad of Docker: isolation, portability, repeatability! There are numerous use cases where Docker might be just what you need, be it data analytics, machine learning, or AI.</p></li><li><p>DOCKERize everything as microservices</p><p>(ARC401) Cloud First: New Architecture for New Infrastructure, Amazon Web Services</p></li><li><p>Why Microservices? Why run microservices using Docker and Kubernetes? Posted by: Seth Lakowske. Published: 2016-04-25</p><p>Benefits of microservices</p><p>1) Code can be broken out into smaller microservices that are easier to learn, release, and update.</p><p>2) Individual microservices can be written using the best tools for the job.</p><p>3) Releasing a new service doesn't require synchronization across a whole company.</p><p>4) New technology stacks have lower risk since the service is relatively small.</p><p>5) Developers can run containers locally, rebuilding and verifying after each commit on a system that mirrors production.</p><p>6) Both Docker and Kubernetes are open source and free to use.</p><p>7) Access to Docker Hub leverages the work of the open source community.</p><p>8) Service isolation without the heavyweight VM.
Adding a service to a server does not affect other services on the server.</p><p>9) Services can more easily be run on a large cluster of nodes, making them more reliable.</p><p>10) Some clients will only host privately and not on public clouds.</p><p>11) Lends itself to immutable infrastructure, so services are reloadable without missing state when a server goes down.</p><p>12) Immutable containers improve security: data can only be mutated in specified volumes, and root kits often can't be installed even if the system is penetrated.</p><p>13) Increasing support for new hardware, like the GPU in a container, means even GPGPU tasks like deep learning can be containerized.</p><p>14) There is a cost to running microservices: the build and runtime become more complex. This is part of the price to pay, and if you've made the right decision in your context, the benefits will exceed the costs.</p><p>Costs of microservices: managing multiple services tends to be more costly, and there are new ways for the network and servers to fail.</p><p>Conclusion: in the right circumstances, the benefits of microservices outweigh the extra cost of management.</p><p>Frank Zhao</p></li><li><p>Docker vs AWS Lambda in General</p><p>AWS Lambda will win - sort of. From a programming model and a cost model, AWS Lambda is the future, despite some of the tooling limitations. Docker, in my opinion, is an evolutionary step of "virtualization" that we've been seeing for the last 10 years; AWS Lambda is a step function. In fact, I personally think it is innovations like Amazon Elastic Beanstalk and CloudFormation that have pushed the demand for solutions like Docker. In the near future, I predict that open source will catch up and provide an AWS Lambda experience on top of Docker containers.
At least one open source project appears to be going down this path already.</p><p>Florian Walker, Product Manager at Fujitsu: The future is now :) Funktion, part of Fabric8, aims to provide a Lambda experience on top of Kubernetes.</p><p>Jason Daniels, CTO, Fujitsu Hybrid Cloud EMEIA: Project Kratos.</p><p>Funktion is an open source, event-driven, Lambda-style programming model on top of Kubernetes. A funktion is a regular function in any programming language, bound to a trigger and deployed into Kubernetes. Kubernetes then takes care of the rest (scaling, high availability, load balancing, logging and metrics, etc.).</p><p>Funktion supports hundreds of different trigger endpoint URLs, including most network protocols, transports, databases, messaging systems, social networks, cloud services, and SaaS offerings. In a sense, funktion is a serverless approach to event-driven microservices: you focus on just writing funktions, and Kubernetes takes care of the rest. It's not that there are no servers; it's more that you, as the funktion developer, don't have to worry about managing them.</p><p>Announcing Project Kratos: I'm happy to announce that Project Kratos is now available in beta, rolling out a set of tools that allow you to convert AWS Lambda functions into Docker images. Now you can import existing Lambda functions and run them via any container orchestration system. You can also create new Lambda functions and quickly package them up in a container to run on other platforms. All three AWS runtimes are supported: Node.js, Python, and Java.</p></li><li><p>Docker Issues: Size</p><p>Docker containers quickly grow in size, as they need to contain everything required for deployment.</p><p>Docker images can get really big. Many are over 1 GB in size. How do they get so big? Do they really need to be this big? Can we make them smaller without sacrificing functionality?</p><p>Here at CenturyLink we've spent a lot of time recently building different docker images.
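</p><p>To see where the bulk of an image comes from, it helps to total up the per-layer sizes, which is what layer-inspection tools do. A minimal sketch; the instructions and sizes below are made up for illustration:</p>

```python
def summarize_layers(layers):
    """Given (instruction, size_in_mb) pairs for an image's layers,
    report the total size and the single largest contributor."""
    total_mb = sum(size for _, size in layers)
    biggest = max(layers, key=lambda layer: layer[1])
    return {"total_mb": total_mb, "biggest": biggest[0], "biggest_mb": biggest[1]}

# Hypothetical breakdown of a 1 GB-class image.
report = summarize_layers([
    ("FROM ubuntu:14.04", 188.0),
    ("RUN apt-get install build-essential ...", 360.0),
    ("COPY model.bin /opt/model.bin", 450.0),
    ("COPY app /opt/app", 12.0),
])
```

<p>In this made-up breakdown, the model weights and the build toolchain dominate the total; moving build-only dependencies out of the shipped image is the usual first fix.</p><p>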
As we began experimenting with image creation, one of the things we discovered was that our custom images were ballooning in size pretty quickly (it wasn't uncommon to end up with images that weighed in at 1 GB or more). Now, it's not too big a deal to have a couple of gigs' worth of images sitting on your local system, but it becomes a bit of a pain as soon as you start pushing/pulling these images across the network on a regular basis.</p><p>There's been a welcome focus in the Docker community recently around image size. Smaller image sizes are being championed by Docker and by the community. When many images clock in at several hundred MB and ship with a large Ubuntu base, it's greatly needed.</p><p>ImageLayers.io is a project maintained by Microscaling Systems since September 2016; it was developed by the team at CenturyLink Labs. This utility provides a browser-based visualization of user-specified Docker images and their layers. The visualization provides key information on the composition of a Docker image and any commonalities between images. It allows Docker users to easily discover best practices for image construction and helps determine which images are most appropriate for their specific use cases.</p><p>Deploying in Kubernetes: please see deployment/</p><p>golang:latest, node:latest, python:latest, php:latest, ruby:latest</p></li><li><p>What is lambda architecture anyway?</p><p>The Lambda Architecture is an approach to building stream processing applications on top of MapReduce and Storm or similar systems. This has proven to be a surprisingly popular idea, with a dedicated website and an upcoming book.</p><p>The way this works is that an immutable sequence of records is captured and fed into a batch system and a stream processing system in parallel. You implement your transformation logic twice, once in the batch system and once in the stream processing system.
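</p><p>That write-it-twice duplication can be sketched as follows. The normalization logic here is a hypothetical stand-in; in a real Lambda Architecture it would live in two separate code bases (say, a MapReduce job and a Storm topology) rather than one shared function:</p>

```python
def normalize(event):
    # Shared business logic (hypothetical): clean up one raw click event.
    return {"user": event["user"].strip().lower(), "clicks": int(event["clicks"])}

def batch_job(log):
    # Batch layer: reprocess the entire immutable log in one pass.
    return [normalize(e) for e in log]

def stream_job(log):
    # Speed layer: the same logic again, applied record by record as data arrives.
    results = []
    for event in log:
        results.append(normalize(event))
    return results
```

<p>The two paths must produce identical results from the same log; keeping two real implementations in exact agreement is the hard part.</p><p>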
You stitch together the results from both systems at query time to produce a complete answer. There are a lot of variations on this.</p><p>The Lambda Architecture is aimed at applications built around complex asynchronous transformations that need to run with low latency (say, a few seconds to a few hours). A good example would be a news recommendation system that needs to crawl various news sources, process and normalize all the input, and then index, rank, and store it for serving.</p><p>I like that the Lambda Architecture emphasizes retaining the input data unchanged. I think the discipline of modeling data transformation as a series of materialized stages from an original input has a lot of merit. I also like that this architecture highlights the problem of reprocessing data (processing input data over again to re-derive output).</p><p>The problem with the Lambda Architecture is that maintaining code that needs to produce the same result in two complex distributed systems is exactly as painful as it seems like it would be. I don't think this problem is fixable. Ultimately, even if you can avoid coding your application twice, the operational burden of running and debugging two systems is going to be very high. And any new abstraction can only provide the features supported by the intersection of the two systems. Worse, committing to this new uber-framework walls off the rich ecosystem of tools and languages that makes Hadoop so powerful (Hive, Pig, Crunch, Cascading, Oozie, etc.).</p><p>Kappa Architecture is a simplification of Lambda Architecture. A Kappa Architecture system is like a Lambda Architecture system with the batch processing system removed. To replace batch processing, data is simply fed through the streaming system quickly.</p><p>Kappa Architecture revolutionizes database migrations and reorganizations: just delete your serving layer database and populate a new copy from the canonical store!
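</p><p>A Kappa-style migration can be sketched as a replay of the canonical log through the single stream job. The click-counting view below is a hypothetical example:</p>

```python
def rebuild_view(event_log):
    """Derive a fresh serving view by replaying the canonical,
    immutable event log through the one stream job."""
    view = {}
    for user, clicks in event_log:
        view[user] = view.get(user, 0) + clicks
    return view

# Migrating just means dropping the old table and replaying from the start.
canonical_log = [("alice", 1), ("bob", 2), ("alice", 3)]
new_view = rebuild_view(canonical_log)
```

<p>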
Since there is no batch processing layer, only one set of code needs to be maintained.</p><p>CHALLENGING THE LAMBDA ARCHITECTURE: BUILDING APPS FOR FAST DATA WITH VOLTDB V5.0 (dataconomy.com)</p><p>VoltDB is an ideal alternative to the Lambda Architecture's speed layer. It offers horizontal scaling and high per-machine throughput. It can easily ingest and process millions of tuples per second with redundancy, while using fewer resources than alternative solutions. VoltDB requires an order of magnitude fewer nodes to achieve the scale and speed of the Lambda speed layer. As a benefit, substantially smaller clusters are cheaper to build and run, and easier to manage.</p></li></ul>