By Jeff Gowan
The business case for network virtualization is largely based on hardware and operating cost savings, along with expectations of greater flexibility and productivity. But how can operators of critical infrastructure be certain that their newly virtualized networks will deliver these benefits?
A big factor in determining the overall cost savings of a virtualization project is network operations. It’s a common assumption that operational efficiency will inevitably be achieved by moving from hardware- to software-based network implementations, and that’s certainly a logical premise. For starters, functions that require manual operation will be replaced by automated processes, reducing the time and resources needed to perform the same tasks. But there is much more to consider when it comes to ensuring cost-efficient operations.
Those who manage critical infrastructure – whether it is a communications network or industrial control system – must be able to deliver the same, if not superior, level of reliability and security via a virtualized network as they have over their legacy hardware-based systems. Their ability to do so cost effectively depends on the capabilities of the virtualization platform they choose.
Critical infrastructure operators must deliver very high availability to avoid costly downtime in their operations and, in some cases, onerous penalties for missing service level agreement (SLA) targets. Achieving such a high caliber of reliability requires complete visibility into what’s happening in the network at all times, as well as instant notifications and automated responses when faults or performance issues arise.
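To make the reliability requirement concrete, it helps to translate an availability target into the downtime it actually permits. The sketch below does that arithmetic for the commonly cited "nines" tiers; the specific tiers shown are illustrative, not SLA figures from the article.

```python
# Translate an availability target into allowable downtime per year.
# "Five nines" (99.999%), often cited for carrier-grade systems,
# permits only a few minutes of downtime annually.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def max_downtime_minutes(availability: float) -> float:
    """Maximum yearly downtime (in minutes) for a given availability fraction."""
    return MINUTES_PER_YEAR * (1.0 - availability)

for label, avail in [("three nines", 0.999),
                     ("four nines", 0.9999),
                     ("five nines", 0.99999)]:
    print(f"{label}: {max_downtime_minutes(avail):.2f} min/year")
```

Running this shows why manual fault handling does not scale to carrier-grade targets: at five nines, the entire yearly downtime budget is roughly five minutes.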
Another crucial aspect of operations is security. There are significant security implications as critical applications are virtualized and delivered from the network edge, rather than from a centralized data center. Edge or on-premises locations are physically less secure than a massive data center. That means security needs to be built into the hardware as well as baked into the software, so that operations staff can effectively protect the network and the applications running on it.
But virtualization platforms that are built using the latest OpenStack distribution or that were originally designed for an IT network environment are not optimized to deliver these basic network monitoring and security capabilities.
That’s where the Wind River Titanium Cloud portfolio of virtualization platforms comes in. Titanium Cloud is designed to meet these carrier-grade and industrial control system requirements so that operators can indeed achieve operational cost savings from virtualization implementations.
In terms of performance and fault management, the platform provides a remote monitoring dashboard that shows system alarms, analytics, and tools, so that operations teams are aware of issues before they can affect services. Importantly, measurements are collected for physical nodes and resources as well as virtual hosted resources. The platform also comprehensively logs all significant events. In addition, there are extensive management interfaces with open APIs that can be integrated with orchestration engines.
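The open-API integration described above might look something like the following sketch, in which an orchestration tool filters an alarm feed down to the events that demand automated response. The JSON field names (`severity`, `entity`, `reason`) and the sample payload are assumptions for illustration only, not the actual Titanium Cloud API schema.

```python
import json

# Hypothetical alarm payload, standing in for the body of a response
# from a platform's alarm-listing REST endpoint. Field names are
# illustrative assumptions, not a documented API schema.
SAMPLE_RESPONSE = json.dumps({
    "alarms": [
        {"severity": "critical", "entity": "compute-1",
         "reason": "NIC port down"},
        {"severity": "minor", "entity": "compute-2",
         "reason": "High CPU utilization"},
    ]
})

def critical_alarms(response_body: str) -> list:
    """Parse an alarm-list response and keep only critical-severity alarms."""
    alarms = json.loads(response_body)["alarms"]
    return [a for a in alarms if a["severity"] == "critical"]

# An orchestration engine could route these to an automated remediation step.
for alarm in critical_alarms(SAMPLE_RESPONSE):
    print(f"{alarm['entity']}: {alarm['reason']}")
```

The value of an open API here is precisely that this filtering and routing logic can live in whatever orchestration engine the operator already runs, rather than being locked to the vendor's dashboard.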
When it comes to security features, Titanium Cloud is comprehensive. At the hardware level, the platform provides Transport Layer Security (TLS) with certificate storage in Trusted Platform Module (TPM) 2.0 hardware to protect management operations. It also features a virtual TPM, which secures the platform software just as a guarded data center protects physical equipment. Titanium Cloud leverages Intel’s Enhanced Platform Awareness and supports Unified Extensible Firmware Interface (UEFI) secure boot, cryptographically signed images, and network-level Authentication, Authorization and Accounting (AAA).
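A minimal sketch of the client side of TLS-protected management traffic, using Python's standard `ssl` module, is shown below. On a platform with TPM-backed certificate storage the private key never leaves the hardware; this sketch shows only the verification policy a management client would enforce, and the choice of TLS 1.2 as a floor is an assumption of ours, not a stated platform requirement.

```python
import ssl

# Build a client-side TLS context that refuses unverified peers,
# hostname mismatches, and legacy protocol versions. This is the
# verification policy; the server's key would sit in TPM hardware.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.verify_mode = ssl.CERT_REQUIRED            # peer must present a valid certificate
context.check_hostname = True                      # certificate must match the hostname
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse older TLS/SSL versions

# A management client would then wrap its socket with this context
# before sending any commands to the platform's management endpoint.
print(context.verify_mode == ssl.CERT_REQUIRED)
```

The point of enforcing this on every management connection is that a physically exposed edge node cannot rely on locked doors; the transport itself has to reject tampered or impersonated endpoints.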
We’ve only scratched the surface here on how Titanium Cloud is optimized for cost efficiency to deliver reliable, secure critical applications. For more detail, please see our recent white paper and video series, “Virtualization The Easy Way.”