Overview

Data Care offers a robust cloud infrastructure specifically engineered for GPU-accelerated applications. Leveraging the power of bare metal Kubernetes, our platform is tailored to enhance performance and reliability for inference-based applications. With Data Care, you gain the efficiency and scalability necessary to handle complex computing tasks with ease, ensuring that your data-driven solutions are both powerful and cost-effective.
After logging in successfully, you will see a welcome page.
When you click on your cluster, the projects and namespaces assigned to you will appear.
Look for the projects or namespaces dedicated to you and start deploying. If the whole cluster is assigned to you, there is no need to look for a specific namespace.
Go to the Workloads tab in the left navigation bar and deploy your workload as a Deployment, DaemonSet, or StatefulSet, depending on how your application needs to run.
To create a Deployment, navigate to “Workload > Deployments > Create” (a sample Deployment manifest is sketched below).
To create a DaemonSet, navigate to “Workload > DaemonSets > Create”. (DaemonSets cannot be created with namespace-level projects.)
To create a StatefulSet, navigate to “Workload > StatefulSets > Create”.
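For reference, here is a minimal sketch of a Deployment manifest that requests a single GPU, roughly what the create form produces. The namespace, names, and container image below are placeholders; substitute your own values.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-demo          # hypothetical name for illustration
  namespace: my-namespace       # replace with the project/namespace assigned to you
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-demo
  template:
    metadata:
      labels:
        app: inference-demo
    spec:
      containers:
        - name: inference
          image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image; use your own inference image
          command: ["sleep", "infinity"]               # keeps the placeholder container running
          resources:
            limits:
              nvidia.com/gpu: 1                        # request one GPU from the node
              cpu: "4"
              memory: 16Gi
```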
Data Care Products
We provide two products: a namespace within a shared cluster, or a dedicated cluster.
With the namespace-level product, we create a namespace for you inside a shared cluster.
We apply the necessary resource quotas to this namespace, based on your requirements, before assigning it to you.
By default, the resource quotas for this namespace are 20 GPUs, 60 CPU cores, 480 GB of RAM, and 2 TB of storage.
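As an illustration, the default quota is equivalent to a Kubernetes ResourceQuota along these lines. This is a sketch only; the object name is hypothetical and the exact fields applied to your namespace may differ.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota             # hypothetical name
  namespace: my-namespace         # replace with your assigned namespace
spec:
  hard:
    requests.nvidia.com/gpu: "20"   # 20 GPUs
    requests.cpu: "60"              # 60 CPU cores
    requests.memory: 480Gi          # 480 GB of RAM
    requests.storage: 2Ti           # 2 TB of persistent storage across all PVCs
```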
Within the namespace dedicated to you, you can create and manage workloads and services.
With the cluster-level product, when you log in to Rancher you see only the dedicated cluster created for you.
Inside this dedicated cluster, you can create and access all namespaces within the cluster.
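For example, with cluster-level access you can create additional namespaces yourself; a minimal manifest (with a hypothetical name) looks like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a        # hypothetical namespace name
```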
For your dedicated cluster, we provide usage metrics managed by Prometheus and Grafana, as well as storage managed by Longhorn; you can access both from the cluster dashboard.
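For persistent storage backed by Longhorn, workloads claim volumes through a PersistentVolumeClaim. Below is a minimal sketch, assuming the default Longhorn StorageClass is named "longhorn"; confirm the StorageClass name in your cluster before using it.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-storage           # hypothetical name
  namespace: my-namespace       # replace with your namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn    # assumed default Longhorn StorageClass name
  resources:
    requests:
      storage: 100Gi            # example size; counts against the storage quota
```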
For your reference, the cluster-level product also includes a sample deployment.
If you opt for the dedicated cluster, the default resources assigned are 40 GPUs, 120 CPU cores, 960 GB of RAM, and 5 TB of storage.