ARTICLES
- Hyperscale Data Centers Ignite Construction Surge
Despite the lack of a universal definition for hyperscale data centers, many people view them as massive facilities that typically handle mission-critical workloads distributed across numerous servers. These buildings have collectively contributed to a data center construction boom, underscoring the world’s substantial and growing dependence on technology and the infrastructure supporting it.
Emerging Technologies Spurring Data Center Construction
When a February 2024 study examined the state of the global data center industry, the results revealed some of the primary drivers of the current construction boom. One finding was that many of the largest hyperscale providers increasingly accommodate clients’ artificial intelligence applications and other computing-intensive requirements.
- Key Considerations for Effective AI/ML Deployments in Kubernetes
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Kubernetes in the Enterprise: Once Decade-Defining, Now Forging a Future in the SDLC. Kubernetes has become a cornerstone in modern infrastructure, particularly for deploying, scaling, and managing artificial intelligence and machine learning (AI/ML) workloads. As organizations increasingly rely on machine learning models for critical tasks like data processing, model training, and inference, Kubernetes offers the flexibility and scalability needed to manage these complex workloads efficiently. By leveraging Kubernetes' robust ecosystem, AI/ML workloads can be dynamically orchestrated, ensuring optimal resource utilization and high availability across cloud environments. This synergy between Kubernetes and AI/ML empowers organizations to deploy and scale their ML workloads with greater agility and reliability.
- AWS LetsEncrypt Lambda or Why I Wrote a Custom TLS Provider for AWS Using OpenTofu and Go
These days, it's challenging to imagine systems that have public API endpoints without TLS certificate protection. There are several ways to issue certificates:
- Paid wildcard certificates that can be bought from any big TLS provider
- Paid root certificates that sign all downstream certificates issued by corporate PKI systems
- Free certificates issued by TLS providers like LetsEncrypt or AWS Certificate Manager
- Self-signed certificates, issued by OpenSSL or another tool
Within the context of this post, I will mainly discuss free certificates that can be used inside of AWS, but not only by AWS services. Clearly, using anything other than AWS Certificate Manager makes no sense if you exclusively use managed AWS services and don't have strict security requirements. AWS Certificate Manager offers a very convenient and speedy method of issuing certificates via DNS or HTTP challenges; however, you face basic AWS limitations if you need to use these certificates outside of AWS services (API Gateway, ALB, NLB, etc.), such as on an EC2 instance running Nginx that needs a physical certificate file. Additionally, even if you request it, AWS Certificate Manager does not display the certificate content.
- Unit Integration Testing With Testcontainers Docker Compose
Is your test dependent on multiple other applications and do you want to create an integration test using Testcontainers? Then the Testcontainers Docker Compose Module is the solution. In this blog, you will learn how convenient it is to create an integration test using multiple Testcontainers. Enjoy!
Introduction
Using Testcontainers is a very convenient way to write a unit integration test. Most of the time, you will use it in order to test the integration with a database, a message bus, etc. But what if your application interacts with an application that consists of multiple containers? How can you add these containers to your unit integration test? The answer is quite simple: you should use the Docker Compose Module.
- Kubernetes Observability: Lessons Learned From Running Kubernetes in Production
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Kubernetes in the Enterprise: Once Decade-Defining, Now Forging a Future in the SDLC. In recent years, observability has re-emerged as a critical aspect of DevOps and software engineering in general, driven by the growing complexity and scale of modern, cloud-native applications. The transition toward microservices architecture as well as complex cloud deployments — ranging from multi-region to multi-cloud, or even hybrid-cloud, environments — has highlighted the shortcomings of traditional methods of monitoring.
- Using AWS WAF Efficiently to Secure Your CDN, Load Balancers, and API Servers
The introduction of software has made remarkable changes to how business is conducted. "Back then," people would meet in person, and most companies used manual methods, which were not scalable. Software has changed the game, and web applications are now essential for a business's success. Software is how customers interact with businesses, share their data, and receive goods and services. Software-as-a-service (SaaS) has become a giant industry, hosting services for customers and taking care of upgrades, scaling, and the security of customer data. With the massive proliferation of SaaS offerings, many of which run on AWS, security is a big concern: malicious actors seek to steal customer data or DDoS the service to prevent legitimate customers from accessing the website.
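One of the protections discussed for this scenario is a WAF rate-based rule, which can be pictured as a sliding-window request counter per client IP. The Python sketch below is a simplified conceptual model of that idea; the limit, window, and `allow` API are illustrative and not AWS WAF's actual implementation:

```python
from collections import defaultdict, deque

class RateLimiter:
    """Toy model of a WAF rate-based rule: reject requests from an IP
    that has exceeded `limit` requests within the last `window_s` seconds."""

    def __init__(self, limit, window_s):
        self.limit = limit
        self.window = window_s
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now):
        q = self.hits[ip]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: block (WAF would return 403)
        q.append(now)
        return True
```

A real rate-based rule also supports scoping (URI paths, headers) and automatic unblocking once the rate drops, which this sketch only hints at via eviction.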
- Guarding Kubernetes From the Threat Landscape: Effective Practices for Container Security
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Kubernetes in the Enterprise: Once Decade-Defining, Now Forging a Future in the SDLC. Kubernetes is driving the future of cloud computing, but its security challenges require us to adopt a full-scale approach to ensure the safety of our environments. Security is not a one-size-fits-all solution; security is a spectrum, influenced by the specific context in which it is applied. Security professionals in the field rarely declare anything as entirely secure, but always as more or less secure than alternatives. In this article, we are going to present various methods to strengthen the security of your containers.
- Azure Deployment Using FileZilla
In today's digital landscape, deploying web applications to the cloud is a common practice. Azure provides various deployment options, including GitHub, Azure DevOps, Bitbucket, FTP, or a local Git repository. In this step-by-step guide, we will focus on the FileZilla FTP client as a means to publish your Angular UI application to Azure. Follow these steps to make your Angular app accessible to the world.
- Building a CI/CD Pipeline With Kubernetes: A Development Guide With Deployment Considerations for Practitioners
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Kubernetes in the Enterprise: Once Decade-Defining, Now Forging a Future in the SDLC. In the past, before CI/CD and Kubernetes came along, deploying software was a real headache. Developers would build artifacts on their own machines, then package them and pass them to the operations team to deploy on production. This approach frequently led to delays, miscommunications, and inconsistencies between environments. Operations teams had to set up the deployments themselves, which increased the risk of human error and configuration issues. When things went wrong, rollbacks were time-consuming and disruptive. Also, without automated feedback and central monitoring, it was tough to keep an eye on how builds and deployments were progressing or to identify production issues.
- Building a Food Inventory Management App With Next.js, Material-UI, Firebase, Flask, and Hugging Face
These days, restaurants, food banks, home kitchens, and any other business that deals with products and foods that go bad quickly need to have good food inventory management. Kitchens stay organized and waste is kept to a minimum by keeping track of stock, checking expiration dates, and managing usage well. I will show you how to make a Food Inventory Management App in this guide. With this app, users can:
- A Decade of Excellence: The Journey, Impact, and Future of Kubernetes
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Kubernetes in the Enterprise: Once Decade-Defining, Now Forging a Future in the SDLC. A decade ago, Google introduced Kubernetes to simplify the management of containerized applications. Since then, it has fundamentally transformed the software development and operations landscape. Today, Kubernetes has seen numerous enhancements and integrations, becoming the de facto standard for container orchestration.
- Using Spring AI With LLMs to Generate Java Tests
The AIDocumentLibraryChat project has been extended to generate test code (Java code has been tested). The project can generate test code for publicly available GitHub projects. The URL of the class to test can be provided; the class is then loaded, its imports are analyzed, and the dependent classes in the project are loaded as well. That gives the LLM the opportunity to consider the imported source classes while generating mocks for tests. A testUrl can be provided to give the LLM an example on which to base the generated test. The granite-code and deepseek-coder-v2 models have been tested with Ollama. The goal is to test how well the LLMs can help developers create tests.
- Geo-Location Redirects With AWS CloudFront
In the age of global digital services, geo-location-based content is critical for improving user experience and engagement, especially if you implement shops or subscription services that should be adapted to the local market. AWS CloudFront is one of the most widely used content delivery network (CDN) systems. It includes certain necessary features to implement geo-location-based redirection out of the box, but not in a single click.
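To illustrate the kind of logic involved, here is a minimal Lambda@Edge-style viewer-request handler in Python that redirects based on the CloudFront-Viewer-Country header. This is a sketch, not the article's implementation: the domain map is hypothetical, and the geo header must be explicitly enabled on the CloudFront distribution for it to be present.

```python
# Hypothetical country -> site mapping for the example.
COUNTRY_SITES = {
    "DE": "https://de.example.com",
    "FR": "https://fr.example.com",
}

def handler(event, context):
    """Viewer-request handler: 302-redirect known countries to a local site."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]  # CloudFront lowercases header keys
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value")
    target = COUNTRY_SITES.get(country)
    if target is None:
        return request  # pass the request through to the default origin
    return {
        "status": "302",
        "statusDescription": "Found",
        "headers": {
            "location": [{"key": "Location", "value": target + request["uri"]}]
        },
    }
```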
- Stream Processing in the Serverless World
It’s a very dynamic world today. Information moves fast. Businesses generate data constantly. Real-time analysis is now essential. Stream processing in the serverless cloud solves this. Gartner predicts that by 2025, over 75% of enterprise data will be processed outside traditional data centers. Confluent states that stream processing lets companies act on data as it's created. This gives them an edge. Real-time processing reduces delays. It scales easily and adapts to changing needs. With a serverless cloud, businesses can focus on data insights without worrying about managing infrastructure.
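As a rough sketch of what serverless stream processing looks like in practice, the following Python function mimics a Lambda handler consuming a batch of Kinesis-style records and aggregating page views per batch. The event shape follows Kinesis's base64-encoded `data` field; the `page` payload key is invented for the example.

```python
import base64
import json
from collections import Counter

def handler(event, context):
    """Aggregate click events per page from one batch of stream records.
    The platform invokes this automatically as data arrives -- no servers
    to manage, and scaling is handled by adding concurrent invocations."""
    counts = Counter()
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        counts[payload["page"]] += 1
    return dict(counts)
```

A real pipeline would write these per-batch aggregates to a downstream store rather than just returning them.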
- Maximizing Cloud Network Security With Next-Generation Firewalls (NGFWs): Key Strategies for Performance and Protection
As cloud networks continue to expand, security concerns become increasingly complex, making it critical to ensure robust protection without sacrificing performance. One key solution organizations use to achieve this balance is the deployment of Next-Generation Firewalls (NGFWs), which play an essential role in securing cloud environments. These advanced firewalls are integral to cloud security strategies, combining multiple layers of defense with optimized performance to tackle evolving threats.
Understanding NGFWs in Cloud Networks
To fully appreciate the scope of cloud network security, it's essential to understand both the capabilities of NGFWs and how they integrate into broader cloud security approaches. NGFWs go beyond traditional firewalls, offering advanced features such as deep packet inspection, application-level filtering, intrusion prevention systems (IPS), SSL/TLS inspection, and threat intelligence integration. Together, these tools help NGFWs safeguard dynamic cloud environments from sophisticated attacks, ensuring a higher level of protection.
- Using AI in Your IDE To Work With Open-Source Code Bases
Thanks to langchaingo, it's possible to build composable generative AI applications using Go. I will walk you through how I used the code generation (and software development in general) capabilities in Amazon Q Developer using VS Code to enhance langchaingo. Let's get right to it!
- Leveraging IBM WatsonX Data With Milvus to Build an Intelligent Slack Bot for Knowledge Retrieval
In today's fast-paced work environment, quick and easy access to information is crucial for maintaining productivity and efficiency. Whether it's finding specific instructions in a runbook or accessing key knowledge transfer (KT) documents, the ability to retrieve relevant information swiftly can make a significant difference. This tutorial will guide you through building an intelligent Slack bot that leverages IBM WatsonX.data and Milvus for efficient knowledge retrieval. By integrating these tools, you'll create a bot that can search and provide answers to queries based on your organization's knowledge sources. We will use IBM WatsonX.data to populate and query relevant documents and IBM WatsonX.ai to answer questions from the fetched documents.
- Assigning Pods to Nodes Using Affinity Rules
This article describes how to configure your Pods to run on specific nodes based on affinity and anti-affinity rules. Affinity and anti-affinity allow you to tell the Kubernetes Scheduler where your Pods should or should not be placed, which can help optimize performance, reliability, and compliance. There are two types of affinity and anti-affinity, as per the Kubernetes documentation:
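To make the filtering idea concrete, here is a toy Python model of how required node affinity narrows the set of schedulable nodes. This is a deliberate simplification of the scheduler, modeling only the `In`, `NotIn`, and `Exists` operators on node labels:

```python
def matches(node_labels, term):
    """Evaluate one nodeSelectorTerm-style list of match expressions.
    All expressions in a term must hold (logical AND)."""
    for expr in term:
        key, op = expr["key"], expr["operator"]
        values = expr.get("values", [])
        if op == "In" and node_labels.get(key) not in values:
            return False
        if op == "NotIn" and node_labels.get(key) in values:
            return False
        if op == "Exists" and key not in node_labels:
            return False
    return True

def eligible_nodes(nodes, required_terms):
    """Nodes satisfying requiredDuringSchedulingIgnoredDuringExecution:
    a node is eligible if ANY term matches (terms are ORed)."""
    return [name for name, labels in nodes.items()
            if any(matches(labels, term) for term in required_terms)]
```

Preferred (soft) affinity would instead score nodes by weight rather than filtering them out, which this sketch omits.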
- Low-Code AI Agent for Classifying User Support Tickets With OpenAI and Kumologica
This article will demonstrate how Kumologica and OpenAI can assist in developing an AI agent API that efficiently classifies cases within an enterprise using user data, without the need for customer support agent intervention. An ideal case management solution should be able to automatically categorize and prioritize cases from various channels, identify high-priority cases, apply labels, and assign them to the appropriate groups without requiring agents to manually perform these tasks. There are several case management products currently available on the market, such as ServiceNow, JIRA, and Salesforce Case Management. While some of these products include built-in solutions for ticket classification, others offer no such functionality. Even for those with built-in solutions, there are limitations in their ability to integrate with third-party systems.
- Maturing an Engineering Organization From DevOps to Platform Team
The DevOps model broke down the wall between development and production by assigning deployment and production management responsibilities to the application engineers and providing them with infrastructure management tools. This approach expanded engineers' competencies beyond their initial skill sets. This model helped companies gain velocity as applications weren't passed around from team to team, and owners became responsible from ideation to production. It shortened the development lifecycle and time to deployment, making companies more agile and responsive.
- Securing Your Azure Kubernetes Services Cluster
In this article, I will present my perspective on securing an Azure Kubernetes cluster with the principle of least privilege as a top priority. I will explain the available built-in Azure Kubernetes Roles, the function of the Microsoft Entra (formerly Azure Active Directory) groups, and how to utilize Kubernetes RBAC to manage access to the workloads. Photo by "ArminH" on Freeimages.com
- Simplifying Multi-Cloud Observability With Open Source
Gartner predicts that by 2028, 50% of enterprises will utilize the cloud. This growth has been accompanied by different strategies for how organizations use the cloud. Initially, organizations were completely on-prem; then they went hybrid, keeping some workloads on-prem while migrating others to the cloud. Eventually, companies started moving to multi-cloud, where they use more than one cloud provider to host their workloads. A recent Oracle survey indicates that 98% of enterprises are either considering or already implementing a multi-cloud strategy. So what are the motivations for these enterprises to move towards multi-cloud?
- Optimizing External Secrets Operator Traffic
In Kubernetes, a Secret is an object that stores sensitive information like a password, token, or key. One of several good practices for Kubernetes secret management is to use a third-party secrets store provider to manage secrets outside of the clusters and configure pods to access those secrets. There are plenty of such third-party solutions available in the market, such as:
- HashiCorp Vault
- Google Cloud Secret Manager
- AWS Secrets Manager
- Azure Key Vault
These third-party solutions, a.k.a. external secrets managers (ESMs), implement secure storage, secret versioning, fine-grained access control, auditing, and logging.
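A common way to reduce traffic to an ESM is to cache fetched secrets and only re-read them after a refresh interval elapses, which is the essence of the External Secrets Operator's `refreshInterval` setting. Below is a minimal Python sketch of that caching logic; the class and its API are illustrative, not the operator's actual code:

```python
import time

class CachedSecretStore:
    """Wrap an external secrets manager fetch function, re-reading a
    secret only after `refresh_interval_s` seconds have passed."""

    def __init__(self, fetch, refresh_interval_s):
        self._fetch = fetch          # e.g., a call to Vault or AWS Secrets Manager
        self._ttl = refresh_interval_s
        self._cache = {}             # name -> (value, fetched_at)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        hit = self._cache.get(name)
        if hit and now - hit[1] < self._ttl:
            return hit[0]            # cache hit: no traffic to the ESM
        value = self._fetch(name)    # cache miss or stale: one ESM call
        self._cache[name] = (value, now)
        return value
```

The trade-off is the usual one: a longer interval means fewer ESM calls but slower pickup of rotated secrets.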
- Exploring the Sidecar Pattern in Cloud-Native Architecture
Distributed services have revolutionized the design and deployment of applications in the modern world of cloud-native architecture: these autonomous, loosely coupled services provide flexibility, scalability, and resilience. They also add complexity to our systems, however, especially around cross-cutting concerns such as logging, monitoring, security, and configuration. As a fundamental design concept, the sidecar pattern addresses these concerns and enhances the distributed architecture in a seamless and scalable manner. Throughout this article, we explore what the sidecar pattern offers, its use cases, and why it has become so widely used in cloud-native environments.
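As a toy illustration of the pattern, the Python sketch below models a pod in which the application writes log lines to a shared volume and a sidecar ships them, keeping the logging concern entirely out of the application code. All names here are invented for the example; in a real pod the shared list would be a shared volume and the sidecar a separate container:

```python
class App:
    """Main container: does business work and writes logs to the shared volume.
    It knows nothing about where logs ultimately go."""
    def __init__(self, volume):
        self.volume = volume
    def handle(self, request):
        self.volume.append(f"handled {request}")
        return "ok"

class LogSidecar:
    """Sidecar container: drains the shared volume and ships log lines
    to a (simulated) central logging backend."""
    def __init__(self, volume, shipped):
        self.volume = volume
        self.shipped = shipped
    def run_once(self):
        while self.volume:
            self.shipped.append(self.volume.pop(0))
```

Swapping the logging backend now means changing only the sidecar, not the application.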
- Reducing Infrastructure Misconfigurations With IaC Security
Infrastructure as Code (IaC) has become the de facto standard for managing infrastructure resources in many organizations. According to Markets and Markets, a B2B research firm, the IaC market is poised to reach USD 2.3 billion by 2027.
What Is Infrastructure as Code?
Before IaC, a developer would use the cloud provider's GUI, clicking through different configurations and settings to provision a resource like a virtual machine. When you need to provision a single instance, this is easy, but modern workloads consist of far more than a single machine: thousands of VMs and hundreds of storage resources, and that is just for one region. To achieve high availability, the same stamp needs to be created in multiple regions and availability zones. One way organizations automated this process is through scripts, though scripts had challenges like versioning and, most importantly, the redundancy of each team repeatedly creating them from scratch.
- Blue-Green Deployment: Update Your Software Risk Free
Anton Alputov, the DevOps architect of Valletta Software Development, shared his DevOps expertise both with me and with the readers. Deploying software updates can often feel like walking a tightrope — one wrong step, and you risk downtime, bugs, or a frustrating user experience. Traditional deployment methods tend to amplify these risks, leaving teams scrambling to mitigate issues post-release. Blue-green deployment (BGD) offers a powerful alternative, enabling a smoother, safer way to release new versions of your applications.
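The core mechanic of blue-green deployment, routing all traffic to one environment and switching only after the idle environment passes health checks, can be sketched in a few lines of Python. This is a conceptual model, not a production router; in practice the switch happens at a load balancer or DNS layer:

```python
class BlueGreenRouter:
    """Send all traffic to the active environment; cut over to the idle
    one atomically, and only if it passes a health check first."""

    def __init__(self, blue, green):
        self.envs = {"blue": blue, "green": green}
        self.active = "blue"

    def route(self, request):
        return self.envs[self.active](request)

    def cut_over(self, health_check):
        idle = "green" if self.active == "blue" else "blue"
        if health_check(self.envs[idle]):
            self.active = idle   # instant switch; old env stays up for rollback
            return True
        return False             # unhealthy: traffic never touched the new env
```

The safety property is visible in the sketch: a failed health check leaves the active environment untouched, and rollback is just another cut-over back.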
- Redefining Artifact Storage: Preparing for Tomorrow's Binary Management Needs
As software pipelines evolve, so do the demands on binary and artifact storage systems. While solutions like Nexus, JFrog Artifactory, and other package managers have served well, they are increasingly showing limitations in scalability, security, flexibility, and vendor lock-in. Enterprises must future-proof their infrastructure with a vendor-neutral solution that includes an abstraction layer, preventing dependency on any one provider and enabling agile innovation.
The Current Landscape: Artifact and Package Manager Solutions
There are several leading artifact and package management systems today, each with its own strengths and limitations. Let’s explore the key players:
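A minimal sketch of such an abstraction layer in Python: application and pipeline code depend only on the `ArtifactStore` interface, so a Nexus, Artifactory, or S3-backed implementation can be swapped in without touching callers. The interface and the in-memory backend below are invented for illustration:

```python
from abc import ABC, abstractmethod

class ArtifactStore(ABC):
    """Vendor-neutral artifact storage interface. Concrete backends adapt
    a specific registry (Nexus, Artifactory, an object store, ...)."""

    @abstractmethod
    def push(self, name, version, data):
        """Store an artifact blob under (name, version)."""

    @abstractmethod
    def pull(self, name, version):
        """Retrieve the artifact blob for (name, version)."""

class InMemoryStore(ArtifactStore):
    """Stand-in backend for tests; a real backend would call a registry API."""
    def __init__(self):
        self._blobs = {}
    def push(self, name, version, data):
        self._blobs[(name, version)] = data
    def pull(self, name, version):
        return self._blobs[(name, version)]
```

Because the pipeline only ever sees `ArtifactStore`, migrating providers becomes a configuration change rather than a rewrite.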
- New Era of Cloud 2.0 Computing: Go Serverless!
Serverless computing is one of the fastest-changing landscapes in cloud technology and has often been termed the next big revolution in Cloud 2.0. In the digital transformation journey of every organization, serverless is finding a place as a key enabler, letting companies offload the business of infrastructure management and focus on core application development.
About Serverless Architecture
Applications on a serverless architecture are event-driven, meaning that functions are only invoked on particular events, like HTTP requests, database updates, and incoming messages. That not only simplifies the development process but also increases operational efficiency, because developers focus on writing and deploying code instead of fiddling with the management of servers.
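The event-driven model described above can be sketched with a tiny Python dispatcher: functions are registered against event types and run only when a matching event arrives. This is a conceptual stand-in for a FaaS platform's trigger wiring, with names invented for the example:

```python
class EventRouter:
    """Minimal event-driven dispatch: a function runs only when an event
    of the type it registered for is emitted -- no idle servers."""

    def __init__(self):
        self._handlers = {}

    def on(self, event_type):
        """Decorator that binds a function to an event type (the 'trigger')."""
        def register(fn):
            self._handlers.setdefault(event_type, []).append(fn)
            return fn
        return register

    def emit(self, event_type, payload):
        """Invoke every handler registered for this event type."""
        return [fn(payload) for fn in self._handlers.get(event_type, [])]
```

In a real platform, `emit` is replaced by the cloud provider delivering HTTP requests, queue messages, or database change events to your functions.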
- 5 Best Practices for Data Warehousing
A data warehouse is a centralized repository that consolidates data from multiple sources to enable comprehensive analysis and support business decision-making. It stores large volumes of historical data, often spanning months or years, making it accessible for trend analysis, reporting, and informed decision-making across organizations. Investing in a data warehouse can help companies create a vault of valuable business information. It is a great way to compile and use statistics effectively. What should IT and business leaders know before developing one?
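A classic warehouse layout for the trend analysis mentioned above is a star schema: a fact table of events joined to dimension tables at query time. Here is a self-contained sketch using Python's built-in sqlite3; the tables and figures are made up for illustration:

```python
import sqlite3

# Tiny star schema: one fact table (sales events) and one dimension (products).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL, sold_on TEXT);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (1, 10.0, '2024-01-05'),
                               (1, 15.0, '2024-02-01'),
                               (2, 60.0, '2024-01-20');
""")

# A typical analytical query: total revenue per product category.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
```

Real warehouses run the same shape of query over columnar engines and years of history, but the fact/dimension split is the same.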