Basic Concepts
What Is CCI?
Cloud Container Instance (CCI) is a serverless container service that allows you to run containers without creating or managing server clusters.
With the serverless architecture, you can focus on building and operating applications without creating or managing servers or worrying about server health. All you need to do is specify the resource requirements (such as the required vCPUs and memory). This lets you concentrate on your business needs and reduces management and maintenance costs.
Traditionally, to run containerized workloads on Kubernetes, you need to create a Kubernetes cluster first. With CCI, you can create and run containerized workloads using the console, APIs, or ccictl without creating or managing Kubernetes clusters, and you pay only for the resources used by the containers.
CCI provides the following functions:
- Automated continuous delivery (CD)
CCI lets you verify each container image update by running the new image with just a few clicks.
- Fully hosted workloads during runtime
Deployments are fully hosted to ensure that applications can run stably.
- Ultra-fast auto scaling
You can create custom auto scaling policies for automatic scaling within seconds.
- High availability for applications
Multiple pods can provide services externally at the same time, and global load balancing ensures that no single pod is overloaded.
- Container status monitoring
The health of containers can be checked, and container metrics are monitored in real time.
- Persistent data storage
Storage volumes can be mounted to containers for persistent data storage.
What Are the Differences Between Cloud Container Instance and Cloud Container Engine?
Huawei Cloud provides enterprise-class container services with high performance, availability, and security. There are two types of container services that have been officially certified by CNCF and developed based on the Kubernetes ecosystem: Cloud Container Engine (CCE) and Cloud Container Instance (CCI).
The two cloud services differ in the following aspects: service introduction, cluster creation, billing modes, and application scenarios.
- Service introduction
Table 1 Introduction to CCE and CCI
CCE: CCE provides highly scalable, high-performance, enterprise-class Kubernetes clusters and supports Docker containers. CCE is a one-stop container platform that provides full-stack container services, from Kubernetes cluster management, lifecycle management of containerized applications, application service mesh, and Helm charts to add-on management, application scheduling, and monitoring and O&M. With CCE, you can easily deploy, manage, and scale containerized applications on Huawei Cloud. For details, see What Is Cloud Container Engine?
CCI: Traditionally, to run containerized workloads using Kubernetes, you need to create a Kubernetes cluster first. CCI is a serverless container service that allows you to run containers without creating or managing server clusters. With CCI, you only need to manage containerized services. You can quickly create and run workloads without managing clusters or servers. Because of the serverless architecture, CCI frees you from containerized application O&M and allows you to focus on your services. For details, see What Is CCI?
- Cluster creation
Table 2 Cluster creation
CCE: CCE is a hosted Kubernetes service for container management. It allows you to create native Kubernetes clusters with just a few clicks. You need to create clusters and nodes on the console, but you do not need to manage the master nodes.
CCI: CCI provides a serverless container engine. When deploying containers on Huawei Cloud, you do not need to purchase or manage ECSs, which eliminates O&M work. You can start applications without creating clusters, master nodes, or worker nodes.
- Billing modes
Table 3 Billing modes
Pricing: CCE is billed for resources, while CCI is billed for vCPUs (vCPU-hour) and memory (GiB-hour).
Billing mode: CCE supports yearly/monthly and pay-per-use billing, while CCI supports pay-per-use billing and packages.
Minimum pricing unit: CCE is billed by the hour, while CCI is billed by the second.
- Application scenarios
Table 4 Application scenarios
CCE: All scenarios. Generally, CCE runs large-scale, long-term, stable applications, such as e-commerce, service middle-end, and IT systems.
CCI: Batch computing, high-performance computing, scale-out during traffic spikes, and CI/CD tests.
- Scheduling workloads from CCE to CCI
The CCE Cloud Bursting Engine for CCI add-on can schedule Deployments, StatefulSets, and Jobs running on CCE to CCI when there are traffic spikes. This reduces the resource consumption that would otherwise be caused by scaling out the cluster.
The following are benefits of this add-on:
- Automatic pod scaling within seconds: When CCE cluster resources are insufficient, the add-on automatically creates pods in CCI instead of adding nodes to the CCE cluster, eliminating the overhead of resizing the cluster.
- CCI seamlessly works with Huawei Cloud SWR so that you can use public and private images in SWR repositories.
- CCI supports event synchronization, monitoring, logging, remote command execution, and status query for pods.
- You can view the capacity information about virtual elastic nodes.
- Service networks of CCE pods and CCI pods can communicate with each other.
For details, see Scheduling CCE Workloads to CCI.
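As a rough illustration only (not an official example): scheduling to CCI is typically controlled by a label on the workload's pod template that the add-on recognizes. The label key and value used below, as well as the Deployment name and namespace, are assumptions made for this sketch; confirm the exact label in the add-on documentation before relying on it.

```python
# Rough sketch, assuming the bursting add-on is installed in the CCE cluster
# and reacts to a pod-template label. The label key/value, Deployment name,
# and namespace below are illustrative assumptions, not confirmed API.
from kubernetes import client, config

config.load_kube_config()          # use the CCE cluster's kubeconfig
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "metadata": {
                "labels": {
                    # Hypothetical label assumed to mark pods eligible for CCI.
                    "virtual-kubelet.io/burst-to-cci": "auto"
                }
            }
        }
    }
}

# "web" and "default" are placeholder names for this sketch.
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```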
What Is an Environment Variable?
An environment variable is a variable whose value can affect the way a running container will behave. You can modify environment variables even after workloads are deployed, increasing flexibility in workload configuration.
The result of setting environment variables in CCI is the same as that of specifying ENV in a Dockerfile.
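For illustration, the sketch below uses the generic Kubernetes Python client to declare an environment variable on a container spec. It is not a CCI-specific API, and the image name and variable are placeholders.

```python
# Minimal sketch with the Kubernetes Python client (pip install kubernetes).
# The image and variable values are placeholders for illustration only.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="nginx:alpine",                                   # placeholder image
    env=[client.V1EnvVar(name="LOG_LEVEL", value="info")],  # hypothetical variable
)

# Equivalent effect inside a Dockerfile:  ENV LOG_LEVEL=info
print(container.env[0].name, container.env[0].value)
```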
What Is Mcore?
A millicore, abbreviated as mcore, is one-thousandth of a vCPU. Generally, the vCPU usage of a containerized workload is measured in mcores.
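As a quick illustration (the helper function is made up for this sketch), converting between mcores and vCPUs is simple division:

```python
# Minimal sketch: 1 vCPU = 1,000 mcores (millicores).
def mcores_to_vcpus(mcores: int) -> float:
    return mcores / 1000

print(mcores_to_vcpus(500))   # 0.5 vCPU
print(mcores_to_vcpus(250))   # 0.25 vCPU
```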
What Are the Relationships Between Images, Containers, and Workloads?
- An image is a special file that includes all the programs, libraries, resources, and configurations for running containers. It also includes some parameters required for the runtime, such as anonymous volumes, environment variables, and users. An image does not contain any dynamic data, and its content remains unchanged after being built.
- A container is a runtime instance of an image. An image is like a class, and a container is like an instance of that class in object-oriented programming. A container can be created, started, stopped, deleted, or suspended.
- A workload is an application running on one or more pods. A pod consists of one or more containers. Each container is created from a container image.
The following figure shows the relationships between images, containers, and workloads.

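To make the image-to-container relationship concrete, here is a minimal, generic sketch using the Docker SDK for Python (not a CCI API); the image name is a placeholder.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker):
# one image can back many independent container instances.
import docker

client = docker.from_env()                    # connects to the local Docker daemon
image = client.images.pull("nginx:alpine")    # placeholder image

# Two containers from the same image: like two instances of one class.
c1 = client.containers.run(image, detach=True)
c2 = client.containers.run(image, detach=True)
print(c1.short_id, c2.short_id)

# Removing the runtime instances leaves the image itself unchanged.
c1.remove(force=True)
c2.remove(force=True)
```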
What Are vCPU-Hours in CCI Packages?
1 vCPU-hour = 1 vCPU × 3,600 seconds = 3,600 vCPU-seconds
One vCPU-hour means that a vCPU is used for one hour.
One vCPU-second means that a vCPU is used for one second.
Case 1:
If a Deployment uses 2.5 vCPUs for two consecutive hours, 5 vCPU-hours are used: 2.5 vCPUs × 2 hours = 5 vCPU-hours, which equals 5 × 3,600 = 18,000 vCPU-seconds.
Case 2:
If you purchase a package with 730 vCPU-hours, your containers can use, for example, 730 vCPUs for one hour or one vCPU for 730 hours.
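The arithmetic in both cases can be checked with a short sketch (the helper function is made up for illustration):

```python
# Minimal sketch of the vCPU-hour arithmetic described above.
VCPU_SECONDS_PER_VCPU_HOUR = 3_600

def vcpu_hours(vcpus: float, hours: float) -> float:
    return vcpus * hours

# Case 1: 2.5 vCPUs for 2 hours.
used = vcpu_hours(2.5, 2)
print(used)                                   # 5.0 vCPU-hours
print(used * VCPU_SECONDS_PER_VCPU_HOUR)      # 18000.0 vCPU-seconds

# Case 2: a 730 vCPU-hour package covers 730 vCPUs for 1 hour
# or 1 vCPU for 730 hours.
print(vcpu_hours(730, 1) == vcpu_hours(1, 730))  # True
```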