Lower GPU pricing, faster startup, and smoother custom networking.
Pricing update:
- RTX 4090: 0.20 EUR/hour
- RTX 5090: 0.40 EUR/hour
Hivenet now offers the most affordable high-quality GPU compute available.
Enhanced networking:
- Custom TCP, UDP, and HTTPS port configuration for improved flexibility (a reachability check is sketched after this list).
- Upgraded SSH documentation and in-app help link for easier setup.
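Once a custom TCP port is configured, a quick reachability check can confirm the rule is live. This is a minimal sketch; the host name and port below are placeholders, not values from Hivenet.

```python
import socket

# Hypothetical values: replace with your instance's public address and the
# custom TCP port you configured for it.
HOST = "instance.example.com"
PORT = 8080

def port_is_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if port_is_open(HOST, PORT) else "unreachable"
    print(f"{HOST}:{PORT} is {state}")
```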
Upgraded base container images:
- Ubuntu 24.04 LTS, CUDA 12.8, and a pre-configured Hugging Face cache (a short environment check is sketched after this list).
- Faster time-to-first-training and consistent environments across GPUs.
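A short sanity check like the one below can confirm the image contents before starting a job. It assumes PyTorch is present for the CUDA check, which may not be true of every image; everything else uses only the standard library.

```python
import os
import platform
import shutil
import subprocess

print("Python:", platform.python_version())

# Ubuntu release (the upgraded images ship 24.04 LTS).
if shutil.which("lsb_release"):
    release = subprocess.run(["lsb_release", "-d"], capture_output=True, text=True)
    print(release.stdout.strip())

# Hugging Face cache location; HF_HOME is the standard override variable.
print("HF cache:", os.environ.get("HF_HOME", "~/.cache/huggingface (default)"))

try:
    import torch  # optional dependency; not guaranteed in every image
    print("CUDA available:", torch.cuda.is_available())
    print("CUDA runtime:", torch.version.cuda)  # expected to report 12.8 on the new images
except ImportError:
    print("PyTorch not installed; skipping CUDA check")
```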
User experience improvements:
- Updated sign-up profiling questions for clarity.
- Last seen tracking for active users on Compute.
Expanded vLLM model catalog, faster launch times, and improved stability.
New pre-packaged vLLM models added (total of 10 now available; a request example follows these notes):
- Meta Llama-3.1 8B Instruct
- Mistral Small-3.1 24B Instruct
- Llama 3.3 70B Instruct
- Mistral Small-24B Instruct
- Qwen-2.5 VL 32B Instruct
- GPT OSS 20B
- Improved launch speed for LLM models with local caching (70B models can take up to 45 minutes).
- Custom credit amounts now available for customers.
- New user profiling on first sign-up or login (optional).
- Stability improvements and bug fixes across workflows.
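Once a pre-packaged model is running, it can be queried through vLLM's OpenAI-compatible API. The sketch below assumes a hypothetical endpoint address and model identifier; substitute your instance's URL and the model name your server actually reports.

```python
import requests

# Hypothetical endpoint: substitute the address and port your vLLM instance
# exposes. vLLM serves an OpenAI-compatible API under /v1.
BASE_URL = "https://instance.example.com:8000/v1"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumed identifier for the Llama-3.1 8B Instruct package

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarize what vLLM does in one sentence."}],
    "max_tokens": 64,
}

# Depending on how the server is configured, an Authorization header with an
# API key may also be required.
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```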
HTTPS support, new inference options, and smoother instance setup.
- HTTPS services are now available (a minimal connectivity check is sketched below).
- vLLM inference servers added.
- Improved instance flow for smoother setup and management.
- More connectivity options introduced.
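As a minimal sketch, a plain HTTPS request can confirm that an exposed service is reachable and presents a valid certificate; the URL below is a placeholder.

```python
import requests

# Hypothetical URL: replace with the HTTPS endpoint your instance exposes.
SERVICE_URL = "https://instance.example.com"

# requests verifies the TLS certificate by default, so a successful call
# confirms both reachability and a valid certificate chain.
resp = requests.get(SERVICE_URL, timeout=10)
print("Status:", resp.status_code)
print("TLS verified and service reachable")
```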
Instance controls, custom templates, and RTX 5090 support.
- Stop and start your Compute instances.
- Custom templates now supported.
- Added RTX 5090 support (a quick GPU check is sketched below).
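To confirm which GPU an instance received (for example, an RTX 5090), nvidia-smi can be queried from inside the instance. This is a generic sketch, not a Hivenet-specific tool, and assumes the NVIDIA driver is installed.

```python
import subprocess

# Query the GPU name and total memory via nvidia-smi's CSV output.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True,
    text=True,
    check=True,
)
for line in result.stdout.strip().splitlines():
    print("GPU:", line)
```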