Enterprise Big Data Clouds, Open Ecosystems, DC Architecture, 4G Wireless, and More!

(Bi)-Weekly Round-Up: 12/01/2014 – 12/14/2014


Here’s bringing you the second of the Bi-Weekly Round-Ups. This period saw over 33 discussions and 57 comments from 17+ unique contributing members, plus over 42 likes, making for an equally active two weeks!

Thank you Anubhav Oberoi, Mirko Jakovljevic, Jules Pedersen, Nir Cohen, Tom Nolle, Ray Le Maistre, David B, Joshua Parnes, Patrick Lopez, Ryan Yaeger, Asad Naveed, Bruno Giguere, for your posts on diverse topics (more on these ahead)! Thanks also to Marcus Friman, Ravinder Shergill, Raghu Ranganathan, Mirko Jakovljevic, Asad Naveed, Azhar Khuwaja, Yaron Banai, Scott Raynovich, Tom Nolle, Anuradha Udunuwara, & Chandra Mallela, among others, for your insightful and thoughtful comments.

MP3 Audio Capsule of the Weekly Round-Up is at: http://bit.ly/1J9TDzS  (Right click the link and choose “Save File As” to download and save the mp3 file. Then enjoy on your favorite player!)

Here’s a clickable mindmap of the top discussions for this period. Clicking on the image will open a new browser window that lets you click through to the various discussions via the mindmap, making it easier to access the discussions referred to in this Bi-Weekly Update. Please don’t forget to tell us how you liked this feature, and what else we could do to make it easier for you to keep learning!

1. Privatizing Enterprise Big Data Clouds!

 The Impact of Big Data…As Much in 2 Days as All of Humanity Produced By 2003!

Joshua Parnes http://bit.ly/1swgul2 posted a fascinating infographic illustrating the impact of Big Data, which is expected to double every 2 years, reaching a staggering 40,000 exabytes by 2020! Today, we produce 5 exabytes of data in just 2 days, the same amount all of humanity produced from the start of time until 2003! Over 68% of this comes from consumers; the rest comes from businesses. The key point is that although 25% of the data has value, only 3% has been analyzed, implying there is a massive untapped (pun intended!) opportunity in big data, and that the network (especially the Carrier Ethernet network) will have to evolve to meet it. Find out other fascinating Big Data facts here http://bit.ly/1swgul2!
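As a quick sanity check, the "doubles every 2 years, 40,000 exabytes by 2020" projection is internally consistent. A back-of-the-envelope sketch (the 2012 starting figure below is an illustrative IDC-style estimate, not a number from the infographic):

```python
# Sanity-check the "doubles every 2 years, 40,000 EB by 2020" claim.
# The 2012 baseline is an illustrative assumption, not a sourced figure.
def projected_exabytes(start_year, start_eb, target_year, doubling_period_years=2):
    """Project data volume forward under a fixed doubling period."""
    periods = (target_year - start_year) / doubling_period_years
    return start_eb * 2 ** periods

# Working backwards: 40,000 EB in 2020 implies roughly 2,500 EB in 2012
eb_2012 = 40_000 / 2 ** ((2020 - 2012) / 2)
print(eb_2012)  # 2500.0

print(projected_exabytes(2012, 2500, 2020))  # 40000.0
```

Which is in the right ballpark for published estimates of the 2012 digital universe, so the doubling arithmetic checks out.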

Clearly, network infrastructure will have to evolve to allow for the transport and storage of such massive amounts of data, and its movement for analysis and retrieval. This will require continuous innovation in telecom architectures, keeping all of us in business for a long, long time :-)

Are you working in Big Data? Is your organization wondering what to do about or with Big Data, and how you might benefit from it? Then do share your thoughts, and let’s understand what your questions are! Also, data scientists are a new breed of specialists, and if you’re one or playing the role of one, we’d love to hear your views on how this is changing the telecom landscape! Please do share your views in the comments http://bit.ly/1swgul2, or on the Group!

 What This Means for the Modern CIO …

Meanwhile Jules Pedersen http://bit.ly/1AbahK9 posted about the impact that CIOs have on business flows in their respective organizations, and how they are meeting this role (together with trying to tame the Big Data tiger above!) by creating hybrid clouds: on-premise infrastructure for critical, profit-center-oriented workloads (think design and engineering at, say, a Broadcom or Intel), and the cloud for core enterprise application workloads (think ERP, CRM, HR, etc.). This is really about shortening the business innovation cycle, getting products and services to market and to the customer faster. An important takeaway is that all of our focus as network engineers and architects needs to be on creating a better way for enterprises to seamlessly handle these workloads!

Do you think we, as an industry, are meeting this challenge? Are you at a service provider catering to the enterprises (large or small)? If so, how are you ensuring that you offer the best network (thus, the best experience) possible for enterprise apps to work smoothly? Share your thoughts here http://bit.ly/1AbahK9!

AWS On-Premise – Really? … and Wearable Biometric Threats BYOW!?

 In other developments, AWS is apparently not getting into the on-premise cloud management business http://linkd.in/1GIWcWq. Rather, they wish to work with private cloud vendors to ensure compatibility between private cloud software tools and AWS’s public cloud. And, they are offering “virtual private clouds” – dedicated infrastructure reserved for individual clients, but hosted on AWS, obtainable at a premium.

Meanwhile, wearable technology such as Google Glass and other biometric monitors (think Fitbit) is creating an enterprise security and privacy threat http://bit.ly/13785rR. (Who’s liable when a wearable inadvertently records audio/video of a private conversation at the company and uploads it to the cloud, purely by accident? And what if a hacker saw the workplace through your eyes by hacking your Google Glass: passwords, confidential documents, product drawings in the making, all without the wearer even knowing? Wow!) This demands a new look at network design and security policy (another great area for folks on the Group to devote their energies to learning!).

So, do you think the smart glasses and fitness trackers (which apparently 25% of adults in some developed markets now use, as per some surveys) present a real enterprise security risk? Has your company been a victim or a provider of services to combat this threat? What design techniques did you consider? Do share your views in the Comments or here http://bit.ly/13785rR!

2. Packet Technology Analysis

How Do You Test TCP QoS? …

Marcus Friman, VP Products at Netrounds, posted an excellent paper on techniques for testing TCP QoS, and insights into the shortcomings of RFC 6349 for testing QoS-enabled connections http://bit.ly/1BLmKW3, on which Chandra Mallela and Azhar Khuwaja had some insightful observations. Note that RFC 6349 describes a method for measuring end-to-end TCP throughput in a managed IP network. QoS testing will become increasingly important across metro, WAN and cloud infrastructure, so it is something we should all be paying attention to. Marcus listed some common ways to test TCP QoS: e.g., add ICMP ping or UDP traffic to one QoS class to see how latency/response times vary; run UDP in multiple classes simultaneously to overload the bandwidth and verify that packets are dropped from the lowest-priority class first; and verify that packets are received with the correct DSCP/PCP at the other end.
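For the class-marking side of such tests, the DSCP value rides in the top six bits of the IP TOS/Traffic Class byte. A minimal Python sketch of marking probe traffic (using standard Berkeley sockets on Linux; this is an illustration of the marking mechanics, not Netrounds’ methodology):

```python
import socket

def dscp_to_tos(dscp):
    """DSCP occupies the top 6 bits of the IP TOS/Traffic Class byte."""
    return dscp << 2

def make_marked_udp_socket(dscp):
    """UDP socket whose outgoing packets carry the given DSCP value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(dscp))
    return s

# EF (Expedited Forwarding) is DSCP 46 -> TOS byte 0xB8
print(hex(dscp_to_tos(46)))  # 0xb8
```

A probe sender would then timestamp datagrams sent on such a socket per class and compare the observed latency distributions; the far end verifies the marking survived transit.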

Are you a carrier architect or operations person tasked with testing QoS on virtual circuits? What techniques have you used and found most valuable? What are some of the areas where you’ve experienced difficulty? Do share them here http://bit.ly/1BLmKW3.

… Deterministic Ethernet in Space  …

Mirko Jakovljevic posted that the NASA Orion test flight used deterministic Ethernet http://bit.ly/1uOSHrP, which is the notion that QoS has fixed/hard parameters, as opposed to having probabilistic indicators. A network with the latter needs careful design. Deterministic Ethernet simply adds a synchronous traffic class that is emulated by using current asynchronous Ethernet capabilities. The bandwidth dedicated to the synchronous traffic class has deterministic QoS parameters (nearly constant latency with extremely low (microsecond) jitter), while the remaining bandwidth operates using probabilistic QoS parameters.
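The idea of carving a synchronous, reserved class out of an otherwise asynchronous link can be sketched with a toy slot scheduler (slot counts, period, and class names below are illustrative assumptions, not actual deterministic Ethernet parameters):

```python
# Toy time-triggered scheduler: sync-class frames get reserved slots at a
# fixed period; best-effort frames fill whatever remains. Illustrative only.
def schedule(num_slots, sync_period, best_effort_queue):
    """Return a slot -> traffic-class map for one scheduling cycle."""
    timeline = {}
    be = iter(best_effort_queue)
    for slot in range(num_slots):
        if slot % sync_period == 0:
            timeline[slot] = "sync"            # reserved: deterministic latency
        else:
            timeline[slot] = next(be, "idle")  # probabilistic best effort
    return timeline

t = schedule(8, 4, ["be1", "be2", "be3"])
sync_slots = [s for s, c in t.items() if c == "sync"]
print(sync_slots)  # [0, 4]
```

The sync class always lands on the same slots, so its inter-send gap is constant (near-zero jitter), while best-effort traffic sees whatever capacity is left over, which is exactly the deterministic/probabilistic split described above.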

I finally wrote a long explanation to questions raised by Chandra on this thread, and you can check it out here http://bit.ly/1uOSHrP.

Do you have a thought on deterministic Ethernet and how it relates to Carrier Ethernet? Is deterministic Ethernet a subset of Carrier Ethernet? Find out here http://bit.ly/1uOSHrP, and share your inputs!

… and CE 2.0’s Impact on Service Providers on Terra-Firma!

Bruno Giguere of EXFO meanwhile held an excellent webinar with Sterling Perrin of Heavy Reading to discuss the state of CE 2.0 http://bit.ly/1BLmKW3 – the new standards for OAM and management approved by the MEF – and to what extent they have been adopted by operators. The webinar shares some interesting insights (e.g. 40% of attendees surveyed still used only a single QoS class), which I would encourage you to check out in the replay.

Are you an operator contemplating CE 2.0 implementation? Are you still trying to figure out what CE 2.0 is all about and the benefits it brings? Do you have questions on this whole issue? Then we want to hear from you, so check out the webinar and do comment here http://bit.ly/1BLmKW3!

3. SDN/NFV Switching … to … Open Ecosystems!

Hardware vs Software Switching for NFVI…

Ryan Yaeger posted an insightful piece by Kelly LeBlanc of 6WIND (whom I had the pleasure of meeting in person at the SDx Summit in Palo Alto in December) http://bit.ly/1zgU3Rx on the importance of not forgetting performance amid the euphoria about virtualized software switch solutions on commodity hardware (NFVI) replacing previously expensive hardware solutions. She argues that adding acceleration to the embedded host software in a COTS platform (used for compute virtualization) can boost aggregate bandwidth on a COTS server to 240 Gb/s, while providing a hardware-independent architecture.
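The intuition behind such software acceleration (e.g., poll-mode, batched packet I/O that bypasses the kernel’s per-packet path) is that fixed per-call overhead gets amortized across a batch. A crude cost model, with made-up overhead numbers purely for illustration (not 6WIND measurements):

```python
# Crude cost model for why batched (poll-mode) packet processing beats
# per-packet system calls: fixed overhead is amortized across the batch.
# The microsecond figures are illustrative assumptions only.
def packets_per_second(per_call_overhead_us, per_packet_cost_us, batch_size):
    batch_time_us = per_call_overhead_us + per_packet_cost_us * batch_size
    return batch_size / batch_time_us * 1_000_000

unbatched = packets_per_second(2.0, 0.05, 1)   # one call per packet
batched = packets_per_second(2.0, 0.05, 64)    # 64 packets per poll
print(round(unbatched), round(batched))
```

Even in this toy model the batched path is more than an order of magnitude faster, which is why software switches with an accelerated data path can credibly compete with dedicated hardware.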

Are you implementing virtualization on a COTS platform in a data center or the enterprise?  What are some of your key questions when evaluating a hardware vs software-based solution? Do ask them here http://bit.ly/1zgU3Rx, so we can have Kelly and her team answer them.

… And Moving NFV to the Field …

Tom Nolle wrote an insightful piece http://bit.ly/1zXRrrM asking what it would take to move NFV into the “field”, into large production environments in huge numbers (he’s talking 80,000 to 130,000 new data centers created by NFV). Tom outlines a “first NFV strategy” for a capex driven carrier, which is promoted by two factors – the Carrier Ethernet and cloud computing opportunities. By contrast, for a service agility driven carrier the key  is to not get into a “silo NFV” situation, where they’re limited by their vendor supported choices to deploy multiple NFV platforms. Finally, for an operational efficiency driven carrier the road is tougher, because NFV operations and management are still immature and not well articulated. The carrier will need to understand NFV element management and how it ties to their overall network management. See Tom’s post http://bit.ly/1zXRrrM for the best approach to NFV implementation.

So are you primarily service-agility driven, capex-driven, or operational efficiency driven? Is your market a mature one or one that’s still evolving with ability to deploy many new services? What is your view about how NFV may be moved to the field, in your particular environment? Do share your view here http://bit.ly/1zXRrrM.

Tom also wrote about HP’s OpenNFV strategy http://bit.ly/1vSKQJd, which they view as “an application of cloud principles to the hosting of network functions.” OpenNFV is an open initiative to build an eco-system where HP provides a platform (modeled after the ETSI NFV framework) on which partners add/extend functionality by providing VNFs (virtual network functions) and NFVI (network functions virtualization infrastructure), thus preventing an NFV silo. Read the details in Tom’s post http://bit.ly/1vSKQJd.

Are you an operator contemplating NFV?  Are you part of executive management and still wondering about the business case? Can’t see the ROI? Then feel free to share your thoughts/concerns here http://bit.ly/1vSKQJd, and ask us for help navigating these NFV waters!

… With an OPEN Eco-system for SDN/NFV…

Roy Chua, Co-Founder at SDNCentral (now SDxCentral), posted about Open: an ecosystem for SDN/NFV http://bit.ly/1BLjslN, by Mathew Palmer. Matt argues that SDN and NFV are making partnerships between vendors critical for the industry’s development, as well as for the evolution of the vendors themselves. It’s no longer just “Cisco” and “everyone else”! In fact, there are three camps: Cisco-centric ecosystems, EMC/VMware-centric ecosystems, and the open-centric ecosystem, which includes 300+ companies in the SDxCentral Directory building businesses off of open source, open standards, and open APIs. This latter ecosystem is ripe for driving innovation, and Matt provides a very nice elaboration of who can benefit from such innovation. Check it out here http://bit.ly/1BLjslN.

Are you a vendor in the open-ecosystem? Do you agree with Matt? What do you think are the pros and cons of the “open” ecosystem? Is it there to stay? Would it dominate the other two ecosystems over time? Do make your voice heard! Here http://bit.ly/1BLjslN.

… Leading to the Highest Performing SDN Software Switch? …

Anuradha meanwhile pointed to the open-source Lagopus switch http://bit.ly/1waMomI, which is apparently a scalable, high-performance, elastic software OpenFlow switch for wide-area networks! Whew, that’s a mouthful. But is it really the world’s highest-performing SDN software switch? I guess given that it’s the only one working with OpenFlow 1.3, that may well be true.

4. Network & Data Center Architecture & Service Models

FB’s Data Center Redesign … Implications for the Industry …

Anubhav Oberoi posted http://bit.ly/1zTSr0b a GigaOM article arguing why Facebook’s redesign of its data centers matters (technically, that is! Any time FB redesigns anything, it matters!). FB announced the “data center fabric” concept, which was explained in a GigaOM podcast on iTunes. The “fabric” allows three things: (a) maximizing data-center space, by shifting from deploying server clusters to a core-and-pod design, which allows pods (a pod being a unit of compute comprising a collection of servers or racks) to be deployed incrementally until physical space or power is exhausted (of which power is the more immediate constraint!); (b) encouraging vendor innovation, so that it aligns with FB’s vision of the fabric, which is designed to use different solutions from different vendors; and (c) improved networks, infrastructure and applications, freeing application developers from the constraint of hitherto working in cluster-type environments, e.g., allowing better operation of FB’s in-memory flash layer Memcached, which because of its “chatty nature” needs a low-latency, high-bandwidth network, something provided by the new architecture.
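The incremental pod build-out can be sketched as a simple capacity calculation, in which power, not floor space, tends to be the binding constraint (all figures below are illustrative assumptions, not Facebook’s actual numbers):

```python
# Sketch of incremental "core-and-pod" build-out: pods are added one at a
# time until the power budget or floor space runs out, whichever binds first.
# All figures are illustrative assumptions, not Facebook's.
def pods_deployable(power_budget_kw, space_racks, pod_power_kw, pod_racks):
    by_power = power_budget_kw // pod_power_kw
    by_space = space_racks // pod_racks
    return int(min(by_power, by_space))

# A hall with 1,200 kW and 200 racks of space; pods of 100 kW / 12 racks:
# 12 pods fit by power, 16 by space -> power binds at 12 pods.
print(pods_deployable(1200, 200, 100, 12))  # 12
```

The appeal of the design is that each pod is an identical, independently deployable unit, so capacity planning reduces to exactly this kind of arithmetic rather than bespoke cluster engineering.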

… A Burger-King Model? …

Scott Raynovich wrote about http://bit.ly/1zoMIye David Hughes’ (VP Engineering, PCCW Global) “have it your way” analogy for how telecom network operators need to provide services to customers. With highly flexible and adaptable cloud services, the customer is now demanding more, better, cheaper, and customized! The cloud will do to telecom what Salesforce did to enterprise software – provide services that are cheap, mobile, and scaled to meet your needs (indeed, sounds like the NaaS vision Nan (Chen) has been advocating for some time now :-) ).

So, are you ready to have it “your way”! And, if you’re an operator or telco, are you prepared for the change that has already arrived? Do you think that will unseat the traditional operators? Share your hopes, fears, and dreams here http://bit.ly/1zoMIye, and let us see if there’s a way to help you prepare to meet this consumer change!

… The Virtual Elephant in the Room or the Real Elephant Virtually Impossible to Ignore! …

Yours truly posted a summary article http://bit.ly/1qSRu6D from a NetEvents debate on the value of SDN/NFV in helping operators prepare to serve the “third platform,” defined by IDC as the combination of mobility, cloud, big data, and social business. This requires telcos to drive cost out of the business while deploying new platforms to improve services – a paradox, the unacknowledged elephant in the room! Panelists from CENX (Chris Purdy), Colt (Nico Fischbach), and Juniper (Nigel Oakley) weighed in on this issue, which you can read about here http://bit.ly/1qSRu6D.

… A Bridge to the Network of the Future? …

Tom Nolle wrote two posts addressing this subject: http://bit.ly/1waNsav and http://bit.ly/1sudss3. The key point is that networks are built on ROI, and both networking and IT are changing rapidly because of the aforementioned trends (see above and Scott’s post). This may, as Tom posits, lead to an optical network foundational layer with a virtual services layer on top, all built using agile optics, SDN, and virtual switching/routing, with traditional Layer 2 and Layer 3 devices replaced by virtual behaviors realized in the cloud and elsewhere. Read some great analysis here http://bit.ly/1waNsav, and an evaluation here http://bit.ly/1sudss3 of how the services of the future would be built: by organizing services horizontally via federations of controllers, and vertically via layers of network technologies. Of course, all of this would have to be orchestrated, a notion that relates to the MEF’s Lifecycle Service Orchestration (LSO) initiatives.

Finally, with “service agility” and “service velocity” being bandied about pretty freely nowadays, it behooves us to look a little deeper at this concept, which Tom does here http://bit.ly/16oCUut. The service lifecycle, per Tom, has four key parts: opportunity and service conceptualization, technology validation and costing, field operations and benefit validation, and deployment. The question is which of these SDN/NFV can speed up. If assembling a new service involves weaving functional components in creative ways, NFV and service chaining could help the architect with service conceptualization. With a DevOps philosophy and proper orchestration tools, running a technology trial to structurally test a functional service could be made easier. Operators also have an advantage over OTTs in that they sell services that consumers pay for, as opposed to relying on ad spend as OTTs do; but they do need to make the delivery pipe they provide profitable! Read more here http://bit.ly/16oCUut and http://bit.ly/1GIN5VR.
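Service chaining, the NFV technique mentioned above for weaving functional components into a new service, can be sketched as function composition over stub VNFs (the VNFs here are hypothetical stand-ins, purely for illustration):

```python
# Service chaining sketched as function composition: a new service is
# assembled by chaining virtual network functions (VNFs). The VNFs below
# are hypothetical stubs for illustration, not real implementations.
def firewall(pkt):
    """Drop (return None) any packet flagged as blocked."""
    return None if pkt.get("blocked") else pkt

def nat(pkt):
    """Rewrite the source address of surviving packets."""
    return {**pkt, "src": "public-ip"}

def chain(*vnfs):
    """Compose VNFs left-to-right into a single service function."""
    def service(pkt):
        for vnf in vnfs:
            if pkt is None:   # a VNF dropped the packet; stop the chain
                break
            pkt = vnf(pkt)
        return pkt
    return service

secure_nat = chain(firewall, nat)
print(secure_nat({"src": "10.0.0.5"}))                    # {'src': 'public-ip'}
print(secure_nat({"src": "10.0.0.6", "blocked": True}))   # None
```

The point of the analogy: once functions are composable like this, conceptualizing a new service becomes a matter of picking and ordering building blocks, which is exactly the agility gain claimed for NFV.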

… But How Reliable in Terms of the Nines!? …

Tom also asks a very pertinent question http://bit.ly/1swdANl: what does Carrier-Grade mean in the context of SDN and NFV?

The only requirements for “carrier grade,” he observes, are that SLA violations are prevented, and that failures don’t drive up opex beyond an acceptable limit http://bit.ly/1swdANl. Thus, we really need availability that fits these two objectives, since the absolutes of “five nines” (with roots in TDM operation) had already fallen by the wayside when we moved to an IP infrastructure, according to Tom (and I agree). We will, however, need carrier-grade servers with high availability (high enough to meet the two objectives above), and techniques for redundancy and failover in the network that guarantee SLAs in a manner proportionate to the price of the service, not measured by some arbitrary, absolute standard.
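The arithmetic behind Tom’s point is straightforward: redundancy, not “carrier-grade” absolutes, is what buys the nines. A quick sketch (assuming independent failures, which real deployments only approximate):

```python
import math

def nines(availability):
    """Number of nines, e.g. 0.999 -> 3."""
    return -math.log10(1 - availability)

def parallel(availability, n):
    """Availability of n redundant instances, assuming independent failures."""
    return 1 - (1 - availability) ** n

single = 0.99               # a plain COTS server: two nines
pair = parallel(single, 2)  # two such servers in parallel: 0.9999, four nines
print(round(nines(single)), round(nines(pair)))  # 2 4
```

So a pair of unremarkable two-nines servers already delivers four nines, which is why availability engineered to the SLA (and its price) makes more sense than chasing five nines per box.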

What do you think of when you think “carrier-grade”? Have you ever actually received five nines reliability? Do you think there is any point in sticking to this outdated notion (even as we love talking about it!)? Weigh in here http://bit.ly/1swdANl!

5. 4G Wireless Usage and Monetization

Patrick Lopez of Core Analysis published his 2014 Video Monetization and Optimization Market Share Update http://bit.ly/1uOOiVJ, while Informa’s David Baker posted on the slower but still dramatic rise of 4G usage in Europe http://bit.ly/1swgdyE.

6. Industry Goings-On …

Finally, in the industry round-up: Veryx explains the MEF’s “Third Network” http://bit.ly/1zoNqLN, and presented a seminar on CE Wholesale Services, which you can check out here http://bit.ly/137ahQa. Asad Naveed posted about the CE APAC conference http://bit.ly/1GpfmSO, while we had a very successful SDx Summit (which I had the privilege to chair and host two panels at) http://bit.ly/16oCbcM at the Carrier Network Virtualization 2014 event in Palo Alto on Dec. 9th. And Ray Le Maistre of Light Reading posted interviews with Michel Combes, CEO of Alcatel-Lucent http://bit.ly/1wyzlM1, and with Bell Labs President Marcus Weldon (ALU’s CTO as well) http://bit.ly/1wyxN4w, which are worth a look!

Welcome your views on all of these, and do take the time to contribute to and learn from those topics that align with what you are working on or interested in at present. Also, for the auditory learners among you (and I am one myself, listening to a few hours of technical content each week!), I’ve made this update available as an mp3 podcast, downloadable from the link below – so happy listening!

BTW, here is the MP3 audio link again http://bit.ly/1J9TDzS.

Would love your feedback on how you like the new formats, and what else we can do to make this more valuable for you. Until next time, may the bits in your byte and the bytes in your packets be profitable!