Capacity planning for highly variable workloads has always been a difficult art, carrying the twin risks of over-provisioning (wasted resources) and under-provisioning (poor performance). Container-based solutions can meet this challenge in ways that would have been difficult in a static VM world.
To address this in some common cases, KEDA (Kubernetes Event-Driven Autoscaling) provides a way to scale containers automatically based on metrics such as IBM MQ queue depth, scaling up as load arrives and back down to zero when it dies away.
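As a concrete illustration, a KEDA ScaledObject ties a deployment to an IBM MQ queue by polling the queue manager's administrative REST API for the current depth. The sketch below is a minimal example, not a definitive configuration: the deployment name, queue manager (QM1), queue name, host URL, and the mq-credentials TriggerAuthentication are placeholders, and the exact trigger metadata fields should be checked against the KEDA IBM MQ scaler documentation for the KEDA version in use.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: mq-consumer-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: mq-consumer             # hypothetical Deployment running the MQ consumer
  pollingInterval: 30             # check the queue depth every 30 seconds
  cooldownPeriod: 300             # wait 5 minutes of no load before scaling to zero
  minReplicaCount: 0              # allow scale-to-zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: ibmmq
      metadata:
        # MQ administrative REST endpoint used to read the queue depth (placeholder URL)
        host: "https://mq-host:9443/ibmmq/rest/v2/admin/action/qmgr/QM1/mqsc"
        queueName: "DEV.QUEUE.1"
        queueDepth: "20"          # target depth per replica
      authenticationRef:
        name: mq-credentials      # TriggerAuthentication holding the REST API credentials
```

With minReplicaCount set to 0, KEDA removes the consumer entirely when the queue stays empty past the cooldown period and recreates replicas as soon as messages start arriving again.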