A client with a B2B video-processing AI platform hired us to help them bring their MVP to production. By working closely with the client we were able to deploy their data-intensive platform across three continents while maintaining compliance with strict data protection regulations and minimizing costs.
AWS DevOps Transformation
Privacy and Security
Our client needed strictly separated infrastructure in China, Europe, and the US for their data-intensive platform in order to comply with each jurisdiction's data-protection regulations. Properly designed infrastructure-as-code built from Terraform modules ensured that their infrastructure was replicated consistently across regions while remaining secure, simple to manage, and easy to extend.
As part of our client's journey we evaluated and eliminated several long-standing pain points in the platform. Configuration (including secrets) either lived on the server itself in plaintext or was hard-coded into the application's source. We externalized all configuration, which is now version-controlled and SOPS-encrypted using AWS KMS keys. API access had previously been controlled by hard-coded keys on the instance that had to be rotated manually. We developed a new security framework that leverages AWS Cognito in conjunction with a third-party communications platform (Virgil) to ensure customer data is end-to-end encrypted and archived with zero-knowledge storage.
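A minimal sketch of how such externalized, SOPS-encrypted configuration might be loaded at startup. The file path, key names, and environment-variable convention here are illustrative assumptions, not the client's actual layout:

```python
import json
import os
import subprocess


def load_config(path: str) -> dict:
    """Decrypt a SOPS-encrypted JSON config file and parse it.

    `sops` fetches the data key from AWS KMS transparently, so the
    application never handles plaintext key material on disk.
    """
    decrypted = subprocess.run(
        ["sops", "--decrypt", path],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(decrypted)


def resolve(config: dict, key: str, default=None):
    """Let environment variables override file values (12-factor style)."""
    env_name = key.upper().replace(".", "_")
    return os.environ.get(env_name, config.get(key, default))
```

With this pattern, `resolve(cfg, "kafka.brokers")` would read a `KAFKA_BROKERS` environment variable first and fall back to the decrypted file, so the same image runs unchanged in every environment.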
The original platform ran on a single VPS and could only process a few customer videos at a time. Artifacts on the instance were periodically cleaned up by a cron job, and a single API server handled all customer queries and was easily overloaded. Working closely with the client, we re-architected the system as a robust twelve-factor, event-driven application that scales out to process hundreds of videos concurrently. The applications were containerized and deployed to ECS; Kafka serves as the event bus for GPU-backed consumers that process the videos and store encrypted artifacts in S3; and the API server was rewritten as a stateless application that uses Redis as a cache. The resulting infrastructure is easy to maintain despite being deployed across multiple regions in China, Europe, and the United States.
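The worker side of such an event-driven pipeline can be sketched roughly as follows. The topic name, consumer group, broker address, and message schema are assumptions for illustration, and the GPU processing step itself is elided:

```python
import json


def artifact_key(event: dict) -> str:
    """Derive the S3 object key for the encrypted output artifact
    (hypothetical event schema with customer_id / video_id fields)."""
    return f"artifacts/{event['customer_id']}/{event['video_id']}.enc"


def run_worker() -> None:
    # Imported lazily so the handler above can be unit-tested without
    # the kafka-python client or a running broker.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "video.uploaded",                 # event topic (illustrative name)
        bootstrap_servers="kafka:9092",   # broker address (illustrative)
        group_id="gpu-workers",           # shared group lets consumers scale out
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        key = artifact_key(message.value)
        # ... run the GPU-backed processing, encrypt the result,
        # and upload it to S3 under `key` ...


if __name__ == "__main__":
    run_worker()
```

Because all consumers share one group, adding capacity is just a matter of launching more containers; Kafka rebalances partitions across them automatically.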
By building and maintaining environments through custom Terraform modules we version-control the entire stack: networking, security groups, IAM policies, and more. This allowed for seamless, exact network duplication for on-demand development, staging, and production environments. The creation of environment resources is completely automated, so developers can focus on software rather than managing infrastructure.
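One way the same modules can stamp out identical environments is to feed them only a small per-environment parameter set. This sketch renders a `*.tfvars.json` file that Terraform reads natively; the environment names and variables are illustrative assumptions:

```python
import json

# Hypothetical per-environment parameters; everything structural stays
# identical because the same Terraform modules build every environment.
ENVIRONMENTS = {
    "dev":     {"region": "eu-central-1", "api_instance_count": 1},
    "staging": {"region": "eu-central-1", "api_instance_count": 2},
    "prod":    {"region": "eu-central-1", "api_instance_count": 6},
}


def render_tfvars(env: str) -> str:
    """Emit JSON for a *.tfvars.json variable-definitions file."""
    params = {"environment": env, **ENVIRONMENTS[env]}
    return json.dumps(params, indent=2, sort_keys=True)
```

Writing `render_tfvars("staging")` to `staging.tfvars.json` and running `terraform apply -var-file=staging.tfvars.json` then produces an environment that differs from production only in the declared parameters.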
Business Challenge Story
DevOps practices enabled development efficiency
Our DevOps philosophy centers on continuous delivery: building, testing, and releasing far more frequently, and ensuring that our client can release reliably at any time. Transforming their processes made everyone an owner, and streamlining the DevOps pipeline made teams more autonomous. Docker-backed processes enable developers to produce self-contained applications that are delivered in a fast, repeatable way. Deploying those Docker containers through Terraform-managed ECS lets us take an infrastructure-as-code approach, making management and deployment transparent, seamless, and fast.
Using containers and an event-driven architecture together with automation allowed our client to hit their target date for deploying across multiple countries. Microservices are scalable and reusable, while containers provide efficient resource management. Each can be used independently, but combining them has clearly improved release frequency, deployments, and overall application efficiency.