Frequently Asked Questions (FAQ)
Platform Overview
What is Rackrr?
Rackrr is an aggregated cloud GPU provider offering on-demand and reserved GPU instances for AI training, inference, and production workloads, typically with more flexibility and lower cost than hyperscalers.
What types of GPUs does Rackrr offer?
Rackrr provides access to enterprise-grade GPUs such as A100, H100, L40, and other data-center GPUs, depending on region and operator availability.
Is Rackrr an alternative to AWS, Azure, or Google Cloud GPUs?
Yes. Rackrr is commonly used as a cost-effective alternative to hyperscaler GPU services, offering flexible pricing, reduced vendor lock-in, and access to non-hyperscaler GPU supply.
Who is Rackrr best suited for?
Rackrr is designed for AI startups, scaleups, research teams, and infrastructure-heavy organizations that need reliable GPU compute without long-term hyperscaler commitments.
Does Rackrr support on-demand and reserved GPU capacity?
Yes. Rackrr offers on-demand GPUs for short-term or burst workloads and reserved capacity for predictable performance and cost control.
Can Rackrr be used for LLM training and inference?
Yes. Rackrr supports training, fine-tuning, and inference for large language models (LLMs), multimodal models, and generative AI pipelines.
Is Rackrr suitable for production AI workloads?
Yes. Rackrr is built to support production-grade AI workloads, not just experimentation, with isolated virtual machines and persistent storage options.
Is Rackrr cheaper than hyperscaler GPU services?
Rackrr is often more cost-effective than hyperscaler GPU offerings, especially for GPU-intensive or long-running workloads, due to flexible pricing and diversified GPU supply.
Does Rackrr offer regional or local GPU availability?
Yes. Rackrr provides access to GPUs across multiple regions, depending on operator capacity, enabling teams to optimize for latency, cost, or data residency needs.
Does Rackrr provide human support for GPU workloads?
Yes. Rackrr offers direct access to a technical support team familiar with GPU infrastructure and AI workloads, rather than automated-only support.
Billing & Account Management
What happens if my Rackrr account balance reaches $0?
If your account balance reaches or falls below $0, all active Rackrr compute resources—including virtual machines (VMs) and attached storage—are automatically terminated. Terminated resources and their data cannot be recovered. To avoid disruption to AI workloads, maintain a positive balance or use reserved or post-pay arrangements where applicable.
What is the minimum balance required to start a GPU instance?
A minimum balance of $10 USD is required to launch a virtual machine or create a storage volume on Rackrr.
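As a rough sketch of how far a given balance goes: Rackrr's actual hourly GPU rates vary by model, region, and operator and are shown in the dashboard after sign-in, so the $0.50/hour figure below is purely illustrative, not a quoted price.

```python
# Estimate how long an account balance covers compute at a fixed hourly rate.
# NOTE: the rate below is hypothetical; real Rackrr rates vary by GPU model,
# region, and operator and are listed in the dashboard.

def runtime_hours(balance_usd: float, hourly_rate_usd: float) -> float:
    """Hours of compute a balance covers at a given hourly rate."""
    if hourly_rate_usd <= 0:
        raise ValueError("hourly rate must be positive")
    return balance_usd / hourly_rate_usd

MIN_BALANCE = 10.00        # minimum balance to launch a VM
ILLUSTRATIVE_RATE = 0.50   # hypothetical $/hour, not a quoted price

print(runtime_hours(MIN_BALANCE, ILLUSTRATIVE_RATE))  # → 20.0
```

At an illustrative $0.50/hour, the $10 minimum balance would cover about 20 GPU-hours; check the dashboard for the real rate of the instance type you plan to run.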
What payment methods does Rackrr accept?
Rackrr accepts credit and debit cards. All payments are processed securely via Stripe, a PCI-compliant payment provider.
Does Rackrr support post-pay or invoiced billing?
Yes. Rackrr offers post-pay billing for enterprise customers, universities, research institutions, and approved partners. Post-pay allows teams to run AI workloads without individual credit cards. To request post-pay access, contact admin@rackrr.ai.
How do I reset or change my password?
Use the "Forgot Password" option on the Rackrr sign-in page to reset your credentials securely.
How do I contact Rackrr support?
Rackrr support can be reached via:
- Email: admin@rackrr.ai
- In-app: Support ticket or chat widget
To speed up resolution, include:
- Resource details: VM ID or resource name
- Issue summary: A brief description of the problem
Typical response time is within 4 business hours.
How are users notified about maintenance or outages?
Rackrr sends email notifications to registered users ahead of any planned maintenance or service interruptions whenever possible.
Are additional support channels available?
Yes. Enterprise and partner customers may receive support via Slack, Telegram, or Discord upon request.
Compute, Security & AI Workloads
How does Rackrr secure data on GPU virtual machines?
Rackrr uses kernel-level virtualization (KVM/QEMU) to ensure strong isolation between workloads. Each virtual machine operates in a secure, isolated environment, and Rackrr does not access or inspect customer data.
What happens when a virtual machine is terminated?
When a VM is terminated, all data on that VM is permanently deleted. Rackrr does not retain backups or copies. Users should store critical data on detachable volumes or external storage.
Where can I see available GPU models and regions?
Available GPU models, regions, and pricing are visible in the Rackrr dashboard after signing in. Availability may vary based on regional and operator capacity.
Can I run Jupyter notebooks on Rackrr?
Yes. Rackrr supports Jupyter notebooks on compatible Linux environments. Setup instructions are available in the Training & Best Practices documentation.
I lost my SSH credentials. Can Rackrr recover them?
No. For security reasons, Rackrr does not store or recover SSH credentials. If access is lost, the VM must be reset or recreated.
Storage & Data Management
What is the difference between VM storage and detachable volumes?
VM storage is local disk space attached to a specific virtual machine. Detachable volumes are persistent storage volumes that can be mounted across different VMs over time.
Detachable volumes are suitable for:
- Datasets
- Model checkpoints
- Shared artifacts
- Long-term storage
They are typically more cost-effective than VM-local storage.
How is data stored on Rackrr secured?
All stored data is encrypted using AES-256 and distributed across nodes with erasure coding, so no single node holds a complete copy, providing both resilience and an additional layer of security. Rackrr does not access customer data.
Is there a minimum storage retention period?
No. Storage can be deleted at any time with no minimum commitment or penalties.
How is storage pricing calculated?
Storage pricing is transparent:
- $0.01 per GB per month stored
- $0.01 per GB downloaded
There are no hidden fees.
Is there a minimum balance required to create a storage volume?
Yes. A minimum balance of $10 USD is required to create a detachable storage volume.
AI, LLMs & Performance
Can Rackrr be used to train large language models (LLMs)?
Yes. Rackrr supports LLM training, fine-tuning, and inference depending on GPU availability and configuration.
Does Rackrr support inference as well as training?
Yes. Rackrr supports batch inference, real-time inference, and offline evaluation jobs. Different workload types can be matched to different GPU and pricing models.
How does Rackrr compare to hyperscaler GPU services?
Rackrr provides access to diverse GPU supply beyond hyperscalers, flexible pricing models, reduced lock-in, and regional availability. This makes Rackrr suitable for teams prioritizing cost control and operational flexibility.
Partners & Volume Usage
Does Rackrr offer volume discounts or partner pricing?
Yes. Rackrr offers volume-based pricing and partner arrangements depending on workload size, duration, and GPU requirements. To discuss pricing, contact admin@rackrr.ai.