DALL-E Mini and all-new free Ampere GPUs for Growth plan subscribers! 🧑‍🎨

The hype across the internet for DALL-E 2 and DALL-E Mini has been building for weeks -- and now you can train your own DALL-E Mini model in a Gradient Notebook!

Let's get into the updates.

DALL-E Mini runtime now live on Gradient! announcement 

We're excited to release a new runtime tile for DALL-E Mini. The runtime is based on JAX and makes it easy to create generative art on high-powered Paperspace GPUs.

To get started, head over to the console, create a new notebook, select the DALL-E Mini tile, and get going!
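Once the notebook is up, a quick sanity check (a minimal sketch, not part of the tile itself) confirms that JAX can see the GPU backing your instance:

```python
# Minimal sanity check inside the DALL-E Mini runtime:
# confirm that JAX is installed and can see the GPU backing the notebook.
import jax

print(jax.default_backend())  # expect "gpu" on a GPU-backed Gradient instance
print(jax.devices())          # lists the accelerator devices JAX can use
```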

New ultra-powerful A4000 and A5000 GPUs now FREE on Gradient Growth plan! announcement 

As we continue to offer the best selection of cloud GPUs on the market, we also continue to extend our lead in the number of unlimited instances we offer to Gradient subscribers.

We've just added A4000 and A5000 machines to the Gradient Growth plan, which means the list of free GPUs available on Growth is longer than ever.

Check out all the free GPUs available to Gradient subscribers below!


| GPU     | Price                     | Architecture | Launch Year | GPU RAM | CPUs   | System RAM | Current Street Price (2022) |
|---------|---------------------------|--------------|-------------|---------|--------|------------|-----------------------------|
| M4000   | Free (Gradient Free-tier) | Maxwell      | 2015        | 8 GB    | 8 vCPU | 30 GB      | $433                        |
| P4000   | $8/mo (Gradient Pro)      | Pascal       | 2017        | 8 GB    | 8 vCPU | 30 GB      | $859                        |
| P5000   | $8/mo (Gradient Pro)      | Pascal       | 2016        | 16 GB   | 8 vCPU | 30 GB      | $1,795                      |
| RTX4000 | $8/mo (Gradient Pro)      | Turing       | 2018        | 8 GB    | 8 vCPU | 30 GB      | $1,247                      |
| RTX5000 | $8/mo (Gradient Pro)      | Turing       | 2018        | 16 GB   | 8 vCPU | 30 GB      | $2,649                      |
| A4000   | $8/mo (Gradient Pro)      | Ampere       | 2021        | 16 GB   | 8 vCPU | 45 GB      | $1,099                      |
| A5000   | $39/mo (Gradient Growth)  | Ampere       | 2021        | 24 GB   | 8 vCPU | 45 GB      | $2,516                      |
| A6000   | $39/mo (Gradient Growth)  | Ampere       | 2020        | 48 GB   | 8 vCPU | 45 GB      | $4,599                      |


For more information, be sure to read the docs.
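If you want to confirm which GPU your notebook actually landed on, a quick check like the following works in any of the PyTorch-based runtimes (this is a sketch and assumes PyTorch with CUDA is installed):

```python
# Check which GPU a Gradient Notebook was assigned (assumes a PyTorch runtime).
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA RTX A4000"
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"{total_gb:.1f} GB of GPU memory")
else:
    print("No CUDA device visible to PyTorch")
```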


Updated PyTorch, TensorFlow, and RAPIDS runtimes announcement 

We also wanted to let you know that we've rolled out updated Notebook runtimes for PyTorch, TensorFlow, and RAPIDS. 

In the notebook console, you'll now find 1-click runtime tiles for PyTorch 1.12, TensorFlow 2.9.1, and RAPIDS 22.06.
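If you want to double-check which framework version a given tile ships with, a small cell like this prints it (a sketch; RAPIDS is checked here via its cudf package):

```python
# Print the framework version bundled with the current runtime tile.
# Run in a PyTorch, TensorFlow, or RAPIDS notebook respectively.
import importlib

for pkg in ("torch", "tensorflow", "cudf"):
    try:
        module = importlib.import_module(pkg)
        print(pkg, module.__version__)
    except ImportError:
        print(pkg, "is not installed in this runtime")
```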


Is there another runtime you wish we'd support out of the box? Let us know!

Introducing Gradient Datasets and an all-new Gradient Notebooks IDE! 🧑‍🚀

We're excited to announce the arrival of a new and improved Gradient Notebooks experience -- now with Gradient Datasets, native support for interactive widgets, improved cell, file, and kernel management experiences, and much more!

Highlights are below, but be sure to read the blogpost for the most detailed explanation of this release.

Introducing Gradient Datasets announcement 

We're pleased to announce the arrival of Gradient Datasets! Datasets make it easy to create portable datasets that you can use across Gradient teams and resources.

You can now create and mount datasets for easy use within a notebook and take advantage of a number of public datasets made available by the Paperspace team. 

Check out the blogpost for a full list of public datasets. 
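As a rough sketch of what working with a mounted dataset looks like from inside a notebook (the mount path below is a hypothetical example; use the path shown for your dataset in the console):

```python
# Explore a dataset mounted into a Gradient Notebook.
# NOTE: the path below is a hypothetical example; use the mount path
# shown for your dataset in the Gradient console.
from pathlib import Path

dataset_root = Path("/datasets/my-example-dataset")
for item in sorted(dataset_root.iterdir())[:10]:
    print(item.name)
```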

Support for interactive widgets improvement 

Gradient Notebooks now provides first-class support for ipywidgets! This includes sliders, checkboxes, multiselects, TensorFlow and PyTorch dataloaders, and more! 

The full list of supported widgets is available here.
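As a minimal example, here's an ipywidgets slider that renders interactively in a notebook cell and reports its value as you drag it:

```python
# A minimal ipywidgets example: an interactive slider that reports its value.
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=16, min=1, max=128, step=1, description="Batch size")

def on_value_change(change):
    print("New batch size:", change["new"])

slider.observe(on_value_change, names="value")
display(slider)
```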

Cell management improvements improvement 

We've brought a number of cell management operations over from JupyterLab to notebooks, such as insert, join, split, and more!

We'll be continuing to add cell management capabilities to the new IDE over time. 

File management improvements improvement 

In addition to cell management improvements, we've also made it easier to manage and manipulate files within the notebook file browser. 

The file manager now behaves as expected when dragging and dropping files and folders.

Kernel management controls improvement  

We've improved the controls for starting and stopping individual kernels from within a notebook. 

It's now easy to assign a notebook file to a particular kernel and to restart and stop individual kernels!

Bonus for Pro/Growth users: terminal updates! improvement 

For users on the Pro or Growth plan, we've enabled split-screen terminals! 

Now it's possible to work in a terminal without leaving a notebook file!

More improvements improvement

  • We improved resource allocation and decreased pending timeouts for notebooks, which means higher availability of notebook machines and fewer stalled notebook starts
  • We improved the refresh rate of notebook logs and improved notebook metrics to display more useful information
  • We updated the two most popular runtime tiles in Gradient Notebooks: PyTorch and TensorFlow! The latest distribution takes advantage of PyTorch 1.11 and TensorFlow 2.7.0.

Bugfixes

  • We fixed an issue that sometimes caused users to be signed out of the Paperspace console when swapping between tabs or sessions
  • We fixed an issue that sometimes caused users with multiple teams to view incorrect resource data
  • We fixed an issue that sometimes caused deployment items to expire and be deleted on teams with a large number of workflows and deployments -- read the blogpost for details

Introducing a new docs experience for Core and Gradient! 📚

New docs come to Paperspace! announcement 

We're excited to introduce an entirely new unified docs experience for Paperspace! 

After maintaining several different systems for documenting different parts of the product, we're eager to announce that Paperspace docs are now available in a single location with a new unified theme and organizational structure!

You can now find Core documentation, Gradient documentation, and general Account Management documentation all in one place!

If you need a place to start, we recommend starting with the Core overview or the Gradient overview -- you'll be able to launch right into tutorials, guides, and reference materials designed to help you succeed with Paperspace.

Have an idea for how to improve Paperspace documentation further? Please send us a note with any comments or suggestions!

All-new Linux SSH experience and improved machine create experience in Core! 🛫

We're excited to announce some brand new Core experiences! Let's jump right into what's new.

All-new Linux SSH experience announcement 

We've reconfigured the Linux machine create experience to optimize for connecting to Linux machines via SSH.

We feel that a direct connection to a Linux machine is a fantastic experience. We'll still support Linux VMs in the browser, but if you get a chance, give SSH a try -- it's so easy to connect!


Managing machines just got a lot better announcement 

We've also released a substantial cleanup of the machines settings page in Core, which makes it easier than ever to access and manage machine settings.

Let's say for example we wanted to create a snapshot of our new machine -- easy!

Or let's say we wanted to update our machine name and adjust the autoshutdown timer? Also easy!

We've also made it easier to do things like assign public IPs, generate templates, and more!

Redesigned account settings improvement

We've also updated the global Paperspace account settings to the latest design system standard. 

You'll now find tabs for Profile, Security, and SSH Keys, and in general you should find it easier to access these important settings.

Dynamic public IP addresses improvement 

  • We added support for dynamic public IP addresses, which provide public IP addresses at minimal cost

Capacity upgrades improvement 

Meanwhile, we've also been busy adding plenty of capacity to Paperspace datacenters.

  • We onboarded a new fleet of RTX4000 machines to the CA1 region
  • We dramatically expanded GPU compute capacity in the NY2 region
  • We added nearly 100TB in shared storage across regions
  • And don't worry, we didn't forget about Europe! New capacity is coming soon!

Bugfixes fix 

  • We fixed a bug that was sometimes causing utilization graphs to display inaccurately



Introducing 100% self-serve private networks, shared drives, and public IPs! 🏄

We just made a number of improvements to help Core power users self-serve Paperspace resources. 

With this update, you can now create private networks, spin up shared drives, and assign public IP addresses to any machines that you manage!

Self-serve private networks improvement 

First up, we're pleased to bring private networks to all Core users. When you create a private network, you create a shared resource pool for your team that is isolated from every other machine and customer on Paperspace.

Once you create a private network, you can add machines and drives to the network to share with team members.

Be sure to read the docs for more info!

Self-serve private storage improvement 

Next up, we've made it easy to share a drive among multiple Core machines. After you create a private network, you can spin up a shared drive and attach it to the network in a matter of seconds!

For more information on shared drives, check out the docs!

Self-serve public IPs improvement 

Finally, we've made it a lot easier to claim and assign public IP addresses! While previously it was possible to assign a machine to a public IP after the machine was created, we've now streamlined the process to make it more visible at the team level.

To claim a public IP, simply visit the Public IPs tab in the console and claim the address. (Note that Public IPs are region-specific.)

To assign the new public IP to a machine, all we need to do is use the Assign feature to select the machine we want to expose to the public web. That's all there is to it!

If you get stuck please read the docs to learn more or reach out to us with any questions. 

Bugfixes fix 

  • We resolved a troublesome issue that resulted in erroneous invoices being sent to a small number of users
  • We decreased errors related to over-provisioning on the Paperspace public cluster
  • We improved the strategy for guaranteeing hot nodes and faster startup times on the Paperspace public cluster
  • We fixed a number of small issues related to Windows 10 BYOL machines

All-new high-powered NVIDIA Ampere instances! 🔋

We're pleased to announce a series of new GPU-backed instances available on both Core and Gradient featuring NVIDIA's Ampere microarchitecture!

Introducing all-new Ampere instances! announcement 

Announced in mid-2020, Ampere is the codename for NVIDIA's latest line of GPU accelerator cards. Competition for these cards has been fierce and we're happy to bring you four flavors of Ampere, anchored by the top-of-the-line A100.

Introducing Ampere instances

In addition to the instances listed, we've also introduced 2-way, 4-way, and 8-way configurations for these cards. 

The full table of instances on Paperspace has been updated in the docs. In general, any instance made available on Core will arrive in Gradient shortly thereafter.

Multi-GPU also comes to Windows machines improvement  

One thing you might have noticed already is that multi-GPU instances in Core are no longer exclusive to Linux. You can now spin up any multi-GPU instance on a Windows machine!

Check out the Paperspace console to get started. 
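A quick way to confirm that every GPU on a multi-GPU instance is visible to your framework (sketched here with PyTorch, assuming a CUDA build is installed):

```python
# Enumerate the GPUs visible on a multi-GPU Core or Gradient instance (PyTorch sketch).
import torch

count = torch.cuda.device_count()
print(f"{count} CUDA device(s) visible")
for i in range(count):
    print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")
```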

Model-backed deployments in Gradient Deployments improvement  

We added an important feature to Gradient Deployments: model-backed deployments! 

Gradient Deployments

It's now possible to inject a model at deployment runtime, which means Gradient can fetch a model from the Gradient model registry directly. Models can also be referenced from an external S3 bucket.

For more information, read the docs or reach out if you'd like a demo!

State persistence bugs in Gradient Notebooks improvement  

We made substantial improvements to the way that application and cell state is managed in Gradient Notebooks. 

Previously, if you navigated away from a notebook while a cell was running and then returned to the notebook, the cell would sometimes lose its state. We're happy to have implemented a substantial fix to this issue and a number of other issues influencing state management.

If you have feedback for us, please drop us a line!

Autosave, private notebooks, a number of bugfixes, and more! 🧑‍🔧

We've released a number of improvements and bugfixes for Gradient!

Gradient Notebooks now autosave by default improvement

We've improved the autosave functionality of notebooks! Whereas before only .ipynb files would save automatically, we now provide autosave functionality for all filetypes within notebooks.

Notebooks running on free GPU instances can now be private on Pro or Growth subscriptions improvement 

If you're on the Gradient Pro or Growth plan, notebooks that run on Free GPU instances can now be made private.

PyTorch container updated to version 1.10, TensorFlow container updated to 2.6.0 

We've updated both PyTorch and TensorFlow default containers in Gradient Notebooks to their latest stable release versions. The new runtimes are now available in the Gradient console. 

Other improvements

  • Gradient Deployments can now pull models registered in Gradient
  • Overall GPU capacity has increased after we addressed an issue related to read-only filesystems used by Gradient Notebooks

Bugfixes

  • We fixed a bug in the notebook create menu that sometimes caused the Workspace URL field not to update when selecting a new runtime
  • We fixed a bug in notebooks that sometimes caused deleted files to linger in the file management pane
  • We fixed a bug in notebooks that caused an empty file to be added to new directories
  • We fixed a bug that sometimes generated duplicate and triplicate notifications when switching teams


Accelerated Gradient Notebook startup and teardown 🚅

Gradient Notebooks just received a substantial speed boost during startup and teardown! As a result, you should experience snappier notebook starts and stops.

Notebooks now start and stop much faster improvement 

We've streamlined the architecture behind Gradient Notebooks to enable this improvement.  

What you need to know 

You will now need to install dependencies each time you start a new session. 

We recommend installing dependencies at the top of your notebook or listing them in a separate requirements.txt file.
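For example, a first cell along these lines re-creates your environment at the start of every session (the requirements.txt file is just the one mentioned above; list your own packages in it):

```python
# First cell of the notebook: reinstall dependencies at the start of each session,
# since installed packages no longer persist between notebook stops and starts.
# requirements.txt is your own dependency list kept alongside the notebook.
%pip install -q -r requirements.txt
```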

Introducing Workflows and Deployments! ⚡️

We're pleased to announce Gradient Workflows and Gradient Deployments! Workflows and Deployments bring production capabilities to all Gradient users.

Make Anything with Gradient. Yes. Anything. announcement 

To celebrate this launch, we've released a brand new commercial about Gradient! 


Introducing Gradient Workflows announcement 

Gradient Workflows is a simple way to automate machine learning tasks. Workflows allows you to build complex, real-world machine learning projects. 

With Workflows you can define arbitrarily complex pipelines for Gradient to orchestrate on your behalf.

To learn more about Workflows, check out the new site!


Gradient Deployments announcement 

Gradient Deployments provides effortless model serving. Deployments allow you to host a trained model on an endpoint for consumption by your application.

Deployments are powered by the same high-performance GPU instances that power the rest of Gradient.

To learn more about Deployments, check out the new site!
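Once a model is deployed, your application consumes the endpoint over plain HTTP. The URL and payload below are hypothetical placeholders, but the shape of the call looks like this:

```python
# Call a model hosted by Gradient Deployments from your application.
# The endpoint URL and JSON payload are hypothetical placeholders;
# use the endpoint shown for your deployment and the schema your model expects.
import requests

ENDPOINT = "https://example-deployment.paperspacegradient.com/predict"

response = requests.post(ENDPOINT, json={"inputs": [[0.1, 0.2, 0.3]]}, timeout=30)
response.raise_for_status()
print(response.json())
```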

New instance types across Core 🐣

We've added RTX4000 and RTX5000 instances to CA1 and NY2 regions, as well as multi-GPU instances for Linux, and new low-cost CPU-only instances for Windows!

Introducing RTX4000 and RTX5000 announcement 

We're pleased to announce RTX4000 and RTX5000 instances are now generally available! 

These cards are based on NVIDIA's Turing microarchitecture and are more than 40% faster than their Pascal series counterparts.


Multi-GPU instances now available on Linux! announcement 

You can now access multi-GPU instances across all regions when selecting Linux as your OS!

P5000x2 instances start at $1.56/hr while P6000x2 instances start at $2.20/hr. 



Low-cost Windows instances now available improvement 

We solidified CPU-only offerings for Windows instances and now provide C5 - C10 instances at an affordable hourly rate.

For just $0.08/hr you can run a full Core VM in the cloud!


Other Improvements

  • We improved our backend error monitoring capabilities, giving us substantially more insight into performance degradation and remediation
  • We accelerated our equipment purchasing plan to provide new hardware faster to meet demand
  • We rewrote some business logic around storage capacity to deliver much faster upgrades

