Cloud computing has given federal agencies the ability to run large-scale modeling and high-performance computing workloads that, not long ago, required coveted time slots on supercomputers. But moving massive data sets from agency data centers to the cloud still involves substantial work. Data extraction charges can also be costly, and until recently, latency was a significant issue.

In a recent podcast, ThunderCat Cloud CTO Nic Perez points to a large federal financial research organization's success in migrating 350 terabytes of historical data to the cloud, using NetApp SnapMirror and ThunderCat's expertise.

This podcast was produced by FedScoop and underwritten by ThunderCat Technology, AWS and NetApp.

You can read the full summary here.