
The New Employee Perimeter

The World Turned Upside Down

Historically, corporate networks were designed for employees seated in physical offices. Vendors like Cisco and Juniper built large companies around these “campus networks”. Employees physically connected their desktops to the network and accessed applications running in on-premise data centers. With WiFi-enabled laptops, users got a taste of freedom, so enterprises deployed Virtual Private Networks (VPNs) to reach on-premise applications from homes and coffee shops. And then many of those applications moved to the cloud. Yet most organizations still have security architectures designed for the era of tower computers attached to physical Ethernet ports.

Cisco’s recent earnings provide the best empirical evidence that network architectures are being inverted by the COVID-19 pandemic. Sales of networking infrastructure are down, while sales of products like VPNs are up significantly. With the world sheltering at home, remote workers are suddenly the rule, not the exception. This shift outside the traditional network perimeter, which began long before COVID-19, presents an opportunity for organizations to finally upgrade their network and security architectures for the way people work today, rather than 20 years ago, and to protect themselves against the most common threats.

The New Rules

The new rules for user and network security must assume that the network is temporal, that users are as likely to be on their WiFi at home as on the office network. And the work they do – whether it’s checking a sales forecast or reviewing the financial model for an acquisition – happens across a heterogeneous mix of operating systems and form factors. Why focus on end user security? The vast majority of attacks now prey on the weakest link in IT security: people. A spear phishing email lands, a user clicks through a link, a threat establishes persistence, ransomware and IP theft proliferate, followed by pandemonium. To make matters worse, these attacks have increased dramatically amid the confusion seeded by COVID-19.

Rule 1: Users access resources (mostly) in the cloud. Enterprise applications now live in the cloud, but the implications need to be operationalized. There’s no need to send SaaS traffic through the VPN back to a corporate network; it reduces performance and increases bandwidth costs. Intelligently route traffic to where it needs to go; don’t bludgeon it through a VPN.

Rule 2: Users leverage a diverse collection of devices. Employees expect a corporate device (laptop) and the ability to add their own devices, such as smartphones and tablets. IT departments must assume this is standard behavior and deploy solutions and policies that support this expectation.

Rule 3: The network perimeter shrinks to the end user. You aren’t going to ship Salesforce a firewall and ask them to install it in front of your CRM instance, so your protections need to be rooted in the end user’s experience. Threats need to be detected and mitigated at the end user level, especially in a post-COVID world where family members are likely sharing devices to accomplish distance learning, remote interactions, and the like.

Rule 4: Consider the implications of Bring Your Own Pipe (BYOP). With everyone working from home, the attack surface has shifted. Home routers, unpatched inkjet printers, security cameras, and smart televisions all represent vectors by which an attacker can compromise valuable intellectual property. Only three months ago, Netgear announced a number of critical vulnerabilities affecting popular consumer routers. It’s critical that organizations understand the environmental threats surrounding their end users.

Some of these rules began to emerge well before COVID-19 became a pandemic, first with Google’s BeyondCorp project and then the larger security industry trend towards Zero Trust (which is currently trapped in marketing buzzword purgatory). But their adoption has been slow relative to the expansion of traditional enterprise networking, especially in organizations that were not born in the cloud, and that must change. 

Check Your Posture

To succeed in this new world, organizations must embrace simplified user and device posture-centric security. There are two domains of focus: the end user and the resource they are interacting with. Rather than binary decisions, solutions should consider key variables to decide if access is granted, and access can be allowed on a granular level (e.g. a user on a personal device on a guest network can access their own HR data, but not company intellectual property).

  • User authentication: leveraging an Identity-as-a-Service offering, make sure the user is who they claim to be – including multi-factor authentication (MFA), passwordless authentication, etc.
  • Device posture: is the device a corporate managed device? Does it have the latest software updates and patches installed? Has a recent anti-malware scan completed with a clean bill of health? 
  • Normal user behavior patterns: does this user normally access these resources, at these times, from these locations? Did the user just appear to access other resources from a geographical location that is impossible to reconcile?
  • Target: what is the enterprise value of the asset the user is attempting to access? Does it contain proprietary information, personally identifiable information? Is it a known attack vector that is unpatched and vulnerable?

By weighing these variables, access policies can be dramatically simplified compared with the complex, antiquated network-centric policies of the past.
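
To make this concrete, below is a minimal sketch, in Python, of what a posture-centric access decision could look like. The signal names, thresholds, and the three-level outcome are illustrative assumptions, not any particular product’s API; in practice the inputs would come from an identity provider, an endpoint management agent, and a behavioral analytics pipeline.

    from dataclasses import dataclass
    from enum import Enum

    class Access(Enum):
        FULL = "full"
        LIMITED = "limited"   # e.g. own HR data only, no company IP
        DENIED = "denied"

    @dataclass
    class Request:
        mfa_passed: bool          # identity: did the user pass MFA / passwordless auth?
        managed_device: bool      # device posture: corporate managed device?
        patched: bool             # device posture: latest updates and patches installed?
        clean_malware_scan: bool  # device posture: recent clean anti-malware scan?
        typical_behavior: bool    # behavior: location/time consistent with history?
        asset_sensitivity: int    # target: 0 = public ... 3 = proprietary / PII

    def decide(req: Request) -> Access:
        # Identity and behavior are hard gates, regardless of network location.
        if not req.mfa_passed or not req.typical_behavior:
            return Access.DENIED
        device_trusted = req.managed_device and req.patched and req.clean_malware_scan
        if device_trusted:
            return Access.FULL
        # Personal or unverified device: allow only low-sensitivity resources.
        return Access.LIMITED if req.asset_sensitivity <= 1 else Access.DENIED

The point is that the policy reads as a handful of understandable rules about the user, the device, and the asset, rather than a tangle of network ACLs tied to wherever the request happens to originate.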

New World Order

By deploying solutions that answer these questions, organizations can build protective moats around their end users and minimize the damage done by an attack. Organizations can also begin to treat all users as equals, regardless of the device or network they’re operating from. COVID-19 has led to new working norms, and we must embrace the new rules for end user-centric security to keep information and employees safe.


The New Employee Perimeter was originally published on LinkedIn.

Source: https://www.linkedin.com/in/todd-graham-720276/

4 Enterprise Developer Trends That Will Shape 2021

Technology has dramatically changed over the last decade, and so has how we build and deliver enterprise software.

Ten years ago, “modern computing” meant teams of network admins managing data centers, running one application per server, and deploying monolithic services through waterfall, manual releases managed by QA and release managers.

Today, we have multi- and hybrid clouds and serverless services, delivered through continuous integration and running on infrastructure-as-code.

SaaS has grown from a nascent 2% of the $450B enterprise software market in 2009 to 23% in 2020, crossing $100B in revenue. PaaS and IaaS represent another $50B in revenue, expected to double to $100B by 2022.

With 77% of the enterprise software market — over $350B in annual revenue — still on legacy and on-premise systems, modern SaaS, PaaS and IaaS eating into that legacy base alone could grow the cloud market 3x-4x over the next decade.

As the shift to cloud accelerates across the platform and infrastructure layers, here are four trends starting to emerge that will change how we develop and deliver enterprise software for the next decade.

1. The move to “everything as code”

Companies are building more dynamic, multiplatform, complex infrastructures than ever. We are seeing the “-aaS”-ification of the application, data, runtime and virtualization layers. Modern architectures demand extensibility to work with any number of mixed and matched services.

Traditionally we have relied on automation and configuration tools such as scripts and cron jobs to manage and orchestrate workflows between different systems and services (e.g., running a data pipeline or provisioning a new server). These tools have been based on a series of daisy-chained rules that lack real versioning, testing or self-healing, and require constant DevOps intervention to keep running.

With limited engineering budgets and resource constraints, CTOs and VPs of Engineering are now looking for ways to free up their teams by moving from manual, time-consuming, repetitive work to programmatic workflows, where infrastructure is written as code and owned by developers.

Companies like HashiCorp and BridgeCrew, along with exciting new open-source projects, are introducing new ways of building, managing and operating every layer of the developer stack using the same version-controlled, immutable, maintainable, programmatic patterns we have grown so accustomed to in software development.
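
As an illustration of the pattern rather than any particular vendor’s API, here is a minimal Python sketch of the “everything as code” idea: the desired state of an environment is declared in version-controlled code, and a reconciliation step drives the actual environment toward it. The ServerSpec fields and the provision/deprovision callbacks are hypothetical placeholders.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ServerSpec:
        name: str
        region: str
        instance_type: str

    # Desired state lives in the repository, reviewed and versioned like any other code.
    DESIRED = {
        ServerSpec("api-1", "us-east-1", "m5.large"),
        ServerSpec("api-2", "us-east-1", "m5.large"),
    }

    def reconcile(actual: set, provision, deprovision) -> None:
        """Drive the running environment toward the declared desired state."""
        for spec in DESIRED - actual:
            provision(spec)      # create anything declared but not yet running
        for spec in actual - DESIRED:
            deprovision(spec)    # remove anything running but no longer declared

Real tools add planning, drift detection, and state storage on top of this loop, but the core shift is the same: infrastructure changes become reviewable, testable diffs rather than daisy-chained scripts.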

2. Death of the DevOps engineer

DevOps has been a transformational approach to the modern developer stack. Teams can develop, deploy and rapidly iterate without the traditional friction of release managers, waterfall builds, DBAs, siloed departments and more. It has led to innovations powering faster, more scalable software delivery.

The ethos behind DevOps was always to bring developers and operations together. Any changes to infrastructure were to be developed and released as code and made available for use by any developer. If done correctly, operating the infrastructure and releasing software could be treated the same as any other codebase.

DevOps was always intended to be an approach, not a role. Unfortunately, we went awry somewhere. Today we have teams of DevOps engineers as large as the application or data teams. The bifurcation into a role led to a number of unfortunate side effects. DevOps engineers have become gatekeepers of the infrastructure, and much of their role is taken up by just keeping it running. Setting up new clusters, environments or pipelines still requires a DevOps engineer to deploy, and deploying new infrastructure means scaling up DevOps headcount. Whether it’s the pace of change in the modern developer stack, some level of role preservation or some combination of the two, the result is a widening division between developers and operations.

That is about to change. We need to go back to DevOps as a way of bringing the building and delivering of software together, not separate.

Similar to what we saw in the evolution of the QA role folding into the developer role as functional and unit tests became standard, DevOps will follow the same path. In a resource and budget-challenged engineering organization, every available headcount will be for developers pushing code and creating robust software. That doesn’t necessarily mean the positions will be eliminated, but the ethos and definitions of their roles will change.

The result will be a return to having only developers: application developers who build and monitor new services, infrastructure developers who deploy and monitor new infrastructure, and data developers who create and monitor new data flows, all enabled by tools and services that allow developers to leverage infrastructure as code and manage “operations” as a feature. The future will bring “DevOps” back to its original ethos and give birth to the infrastructure engineer, focused on building infrastructure through code.

3. Introduction of the virtual private cloud as-a-Service

The long-standing challenge enterprises face with managed services (SaaS/PaaS/IaaS) running on public clouds is their multitenant nature.

An enterprise’s data is increasingly one of its most strategic assets. The risks of data leakage and security breaches, along with regulatory concerns and costs, have driven enterprises to introduce hybrid environments. Hybrid clouds are combinations of public clouds for specific services and storage, and private clouds — configurable pools of shared resources that are on-premise to the enterprise.

The virtual private cloud (VPC) option has emerged as an alternative to meeting the data security and performance challenges that face enterprises. VPCs are isolated environments within a public cloud instance, meaning a private cloud within a public cloud without the IT and operational overhead of bare metal and resource management. With VPCs, enterprises can take advantage of public cloud benefits of on-demand infrastructure and reduced operational overhead, while maintaining data, resources and network isolation.

But VPCs aren’t the end-all solution. VPCs often don’t have all the features or capabilities of the regular public cloud SaaS offering, given their isolated instances. Monitoring and uptime are specific to each instance — meaning if your VPC is down, that’s likely not the case for other customers, often leading to slower resolution times. The age-old “cattle, not pets” methodology for managing servers is to leverage dispensable systems that have immediate redundancy and failover. That only works for a system with resource flexibility, which a VPC by nature does not have.

Welcome to the era of the VPC-as-a-Service. A fully managed environment and service, offering the performance, reliability and scale of a multitenant public cloud service, but with the data security, namespaces and isolation of a virtual or on-premise private cloud. These services will offer network isolation, role-based access management, bring-your-own SSO/SAML, and end-to-end encryption, but operate like cattle, not pets. Offerings such as MongoDB’s Atlas have become the future reference architecture for enterprise-friendly “-aaS” offerings — performant, reliable, scalable and ultra-secure.

4. The new age of open-source infrastructure unicorns

As technologies and architectures shift or grow stagnant, the open-source community is often the catalyst for new methods and approaches. The last 10 years have seen a remarkable focus and reinvention of the data, runtime, middleware, OS and virtualization layers. As a result, the open-source community is responsible for creating many billion-dollar companies, including: Confluent, Databricks, Mulesoft, Elastic, MongoDB, Cockroach Labs, Kong, Acquia, Hashicorp, Couchbase, Puppet Labs, WP Engine, Mapbox, Fastly, Datastax and Pivotal.

Today the cloud still only accounts for a fraction of the $450B enterprise software market. As the cloud continues to evolve, so do the reference architectures that sit on top of it. Five years ago, we couldn’t conceive of serverless services, within containers, across clouds, segmented into microservices, scaled on-demand.

As architectures shift, so do budgets. In the coming months and years, we will see a new wave of migrations off legacy, proprietary and on-premise systems that are becoming choke points or single points of failure.

These migrations will specifically focus on the infrastructure, network, storage and data flow layers: replacing the systems that power our services, untouched for years, with IaaS and PaaS offerings. From legacy ETL systems, data services, gateways, network and storage management, CASB and WAF, to the reinvention of vertical services from the likes of Oracle, SAP, SAS and IBM. IaaS and PaaS will represent the fastest-growing segments of the cloud.

Enterprises will demand the freedom to mix and match managed and hosted services. They will demand a lower total cost of ownership from solutions that are cloud-native by design, can modularly fit into any environment and are written as code.

Open-source projects (and their commercial developers) such as Druid (Imply), Arrow (Dremio), Flink (Ververica) and others will emerge as the open-source leaders to power the infrastructure for the next generation of enterprise software.

2021 and beyond

Companies such as Hashicorp and MongoDB are just scratching the surface of these emerging trends. As the Fortune 5000 continues to accelerate spend to the cloud, so will the emergence of a new generation of companies that modernize the way software is delivered. We will finally say goodbye to the days of complex multi and hybrid cloud orchestration, segmented DevOps teams, manual remediation and constant operator intervention. We will say hello to programmatic infrastructures, expressed in end goals and desired states, built by developers for developers.

It’s time to shift focus from delivering software to delighting customers. Welcome to a new era of modern software delivery.


4 Enterprise Developer Trends That Will Shape 2021 was originally published on TechCrunch.

Source: https://techcrunch.com/

Setting the Stage for a Commercial Era of Quantum Computers

Quantum computing has always fascinated me. Since reading the New Yorker’s Dream Machine article about physicist David Deutsch almost a decade ago, I’ve been enthralled with the possibility of a technological reshuffle, unlocking a new dimension of answers about ourselves, nature, and the universe.

As a lifelong student of physics and computer science, I see quantum computing as one of the most critical technologies for supporting humanity’s longevity. As a Partner at Venrock, a venture capital firm, I see it as the most fundamental speed-up in computing power we have ever encountered, with the potential to transform almost every industry and vertical.

The challenge is that quantum computing is always “five to seven years away.”

Always five to seven years away

Every major technology of our era has gone through a perpetual cycle of scientific discovery, engineering for commercial use, and manufacturing for production scale. As new advancements or improvements are discovered, they progress through the same cycle.

  1. Scientific discovery: The pursuit of principle against a theory, a recursive process of hypothesis-experiment.
  2. Engineering: Success of the theory and proof of principle stage graduates to becoming a tractable engineering problem, where the path to getting to a systemized, reproducible, predictable system is generally known and de-risked.
  3. Production: Lastly, once successfully engineered to meet performance, focus shifts to repeatable manufacturing and scale, simplifying designs for production.

Since Richard Feynman and Yuri Manin first theorized computing that leverages the principles of quantum mechanics, quantum computing has been stuck in a perpetual state of scientific discovery: reaching proof of principle on a particular architecture or approach, but unable to overcome the engineering challenges required to move forward.

These challenges have stemmed from a lack of individual qubit control, sensitivity to environmental noise, limited coherence times, limited total qubit volume, overbearing physical resource requirements, and limited error correction.

Welcome to the engineering phase

After years and decades of research and progress, we are entering a new phase.

In the last 12 months, we have seen several meaningful breakthroughs in quantum computing from academia, venture-backed companies, and industry that look to have cleared the remaining challenges along the scientific discovery curve, turning quantum computing into a tractable engineering problem.

Just in the last six months, we have seen: Google demonstrate a path to ‘quantum supremacy’; a new record set for preparing and measuring qubits inside a quantum computer without error; Honeywell announce it will “soon release the most powerful quantum computer yet”; and Intel announce “Horse Ridge”, its cryogenic control system-on-chip (SoC). The list goes on. All share similarities in breaking through the challenges related to individual qubit control and coherence times; that is, creating qubits that are usable for long enough to do something meaningful.

At the same time, modern computing industry leaders have been laying the foundation for future quantum applications. Microsoft’s Azure Quantum and Amazon’s AWS Braket announced managed cloud services enabling developers to experiment with different quantum hardware technologies. Google launched TensorFlow Quantum, a machine learning framework for training quantum models. Baidu, IBM, and others have followed suit. The pace of releases and announcements is accelerating.

The first generation of commercial quantum computers

Unlike in previous years, we are seeing real results. Tens of qubits, with individual control, at meaningful coherence times, demonstrating exciting capabilities. We are at the brink of a new generation of companies, both new and old, introducing commercial-capable quantum systems that will set the stage for the industry for years to come.

Companies such as Atom Computing*, leveraging neutral atoms for wireless qubit control, Honeywell with its trapped-ion approach, and Google with its superconducting metals, have demonstrated first-ever results, setting the stage for the first commercial generation of working quantum computers.

While early and noisy, these systems, even in the 40–80 error-corrected qubit range at 99.9% fidelity, may be able to deliver capabilities that surpass those of classical computers, accelerating our ability to make better thermodynamic predictions, understand chemical reactions, improve resource optimization, and improve financial predictions.

This means pharma companies will be able to simulate chemical reactions in pursuit of a new vaccine in ways that are not feasible on a classical computer. Solar cell manufacturers can surpass current modeling capabilities for improved conversion efficiency; even a few percentage points can be a significant tipping point toward a fully renewable future. Manufacturers dealing with complex assembly lines (auto, pharma, microelectronics), which often pose large combinatorial problems, could see significant improvements in yield and production efficiency.

Examples of hybrid algorithms for 40–80 qubit quantum systems:

1. QAOA (quantum approximate optimization algorithm): a hybrid quantum-classical approach to polynomial and super-polynomial time algorithms for finding a “good” solution to an optimization problem. An example is the traveling salesman problem, where the figure of merit is the ratio between the quality of the polynomial-time solution and the quality of the true answer.

Use cases include network optimization, machine scheduling, max cut solver, operations research, image recognition, and electronic circuit design.
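
To make the figure of merit concrete, here is a small, purely classical Python sketch for a toy Max-Cut instance (one of the use cases above): it brute-forces the true optimum, which is only feasible because the graph is tiny, and computes the approximation ratio for a candidate cut standing in for a bitstring sampled from a QAOA circuit. The graph and candidate are arbitrary illustrative choices.

    from itertools import product

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy 4-node graph
    n = 4

    def cut_value(assignment):
        # Number of edges whose endpoints land on opposite sides of the cut.
        return sum(1 for u, v in edges if assignment[u] != assignment[v])

    # Brute-force the true optimum over all 2^n partitions (only feasible for tiny n).
    best = max(cut_value(a) for a in product([0, 1], repeat=n))

    candidate = (0, 0, 1, 1)             # stand-in for a sampled QAOA bitstring
    ratio = cut_value(candidate) / best  # figure of merit: closer to 1 is better
    print(f"candidate cut = {cut_value(candidate)}, optimum = {best}, ratio = {ratio:.2f}")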

2. Quantum Boltzmann ML: Boltzmann machines are particularly well suited to quantum computing architectures because of their heavy reliance on binary variables, specifically in solving challenging combinatorial optimization problems. A neural network designed to run through a Boltzmann method could allow a significant speedup in the training and inference of deep learning models that rely on probabilistic sampling.

Use cases include binary matrix factorization, ML boosting, reinforcement learning, network design, portfolio modeling.
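
For intuition, here is a minimal classical sketch, using NumPy with random placeholder parameters rather than a trained model, of the Gibbs sampling step at the heart of a restricted Boltzmann machine. This repeated probabilistic sampling over binary variables is the kind of step a quantum sampler could potentially accelerate.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 4
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # couplings (placeholder values)
    b = np.zeros(n_visible)                                # visible biases
    c = np.zeros(n_hidden)                                 # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v):
        # Sample hidden units given visible units, then visible units given hidden.
        h = (rng.random(n_hidden) < sigmoid(v @ W + c)).astype(float)
        v_new = (rng.random(n_visible) < sigmoid(h @ W.T + b)).astype(float)
        return v_new, h

    v = rng.integers(0, 2, size=n_visible).astype(float)
    for _ in range(100):   # burn-in; this sampling loop is the classical bottleneck
        v, h = gibbs_step(v)
    print("sampled visible state:", v)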

Welcome.

The early quantum systems won’t crack public-key encryption or solve our universe’s NP-complete problems, yet; that would require thousands to tens of thousands of qubits to account for error correction. They will, however, introduce commercial quantum computing to the world, kick off a domino effect of demand around enterprise quantum readiness and access, accelerate the integration of more modern computing approaches, and forever change the computing landscape for decades to come.

*Venrock is an investor in Atom Computing.


Setting the Stage for a Commercial Era of Quantum Computers was originally published on Medium.

Source: https://medium.com/@ethanjb

Connecting the next four billion people, Venrock’s investment into Astranis

On January 1, 1983, a group of researchers at ARPA adopted the TCP/IP protocol for the ARPANET, marking the birth of the network of networks that eventually became today’s modern internet. In the following 36 years, over three billion people globally joined the internet.

Today we consume data at a rate 5x faster than our population growth. Mobile data traffic is expected to increase 7x by 2022 (a 46% CAGR). Data is being demanded faster than we can lay down new fiber, put up new cell towers, install new servers, launch new satellites, and build new data centers.

And over the next 10 years, an additional four billion people are expected to join the internet community.

Traditional approaches

Traditional approaches to connectivity require the build-out of an internet backhaul infrastructure — done traditionally by laying down fiber lines or undersea cables, then connected to cell towers for broadcast. This core infrastructure, backhaul capacity, is bought by telcos to connect cell towers to the internet, and while cell towers are cheap, building the backhaul infrastructure is a significant undertaking, and economically prohibitive without a guaranteed network of revenue-generating users ready to consume.

In the US, fiber costs an estimated $27K per mile to lay down in an urban region, and we’ve already spent over $100 billion laying down fiber lines connecting IP backbones to gateways and cell towers.

In a remote or rural region, e.g. Alaska, it can cost 10x to 100x that, so terrestrial networks have not been a viable option for providing connectivity there. Yet 45% of the global population lives in remote or rural regions.

Due to lands too large, terrain too difficult, climate conditions too severe, or rural and remote populations too dispersed, the traditional solutions that connected the first three billion will not work for the next four billion.

The hundreds of billions of dollars in fiber lines connecting our IP backbones to gateways and cell towers will be out of reach for the next four billion, who live across both developed and emerging nations, often in rural or remote regions, starting with limited to no connectivity infrastructure and with far less economic buying power.

The next four billion

At Venrock, we have always been interested in the infrastructure stack and services that power our everyday lives: commerce, media, communication, connectivity… As legacy systems exceed their scale and performance capabilities trying to support existing demand, and new market participants join to further stress their capacity to serve, markets begin to shift toward embracing new solutions.

With any service, the way we served the first three billion people is unlikely to be the way we serve the next four billion.

We have always believed that the way we provide connectivity and bandwidth had to change: the infrastructure was bursting at the seams, we can’t lay down fiber fast enough, the needs of new market participants differ from what traditional solutions can offer, and the scale and performance requirements of existing demand alone can no longer be met (5G, 8K video, the continued migration from cable to streaming, etc.). The connectivity infrastructure cycle could not keep up, leaving an opening for a new solution to emerge.

We evaluated all of the existing capabilities, new technologies, and emerging capabilities to determine what a solution would need to look like:

1. It needed to be accessible by both the emerging & developed markets

  • Provide meaningful bandwidth and latency to access 95% of internet services
  • Could hyper-target markets and regions, and augment as market needs change
  • Could be affordable for the emerging market consumer

2. Would require little to no new ground infrastructure

  • Rely on cheap, commodity user terminals or antennas to connect endpoints
  • Would not require laying down of new ground infrastructure to connect regions together (fiber, copper, etc)
  • Would not require government subsidies or eminent domain to stand up ground endpoints

3. Could augment connectivity and services into existing service networks

  • Would not have to take on the complexity, operations, and capital-intensive task of building a direct-to-consumer service
  • Could be sold into existing channels and networks

4. Could be capitally efficient to get to market

  • Would not require massive upfront capital investments of $100Ms prior to starting service
  • If satellite-based, it would not require constellations of satellites in order to minimally support a single region.
  • Could show a path to positive unit economics in their first market

5. Its technical architecture was proven and could scale up

  • Would not rely on complex, unproven astrophysics
  • Could linearly or exponentially scale up services & capacity
  • Relied on a software-defined system that could improve/optimize over time

With ground fiber out of the running, we looked to large, GEO-based satellites, LEO-based mega constellations, and atmospheric-based balloons & drones as potential solutions.

Geosynchronous Satellites

Large GEO satellites have traditionally been the only option for providing backhaul to power new 2G/3G/4G networks and bandwidth, with over $120B per year spent on satellite bandwidth worldwide. These very large (6,500 kg) and expensive ($300M) satellites are deployed into GEO (geosynchronous orbit, 35,786 km above the earth, where satellites match the earth’s rotation) to provide continuous coverage to predefined markets.

  • Offering GEO-based satellite communications is a significant undertaking; MNOs (mobile network operators) or ISPs contract with large aerospace manufacturers such as Airbus or Lockheed to deploy to pre-defined markets based on either a guaranteed network of revenue-generating users or broad enough coverage to diversify the risk of turning on new markets.
  • These satellites are what are known as analog “bent pipe” payloads, meaning they simply act as repeaters, taking incoming signals and both shifting the frequencies and boosting their power before transmitting them back to the ground.
  • These conventional analog payloads have rigid bandwidth and fixed coverage that are set during the design and build. Once you deploy the satellite to a region, the coverage area and bandwidth are fixed, regardless of the change in demand or market.
  • While traditional satellites enable connectivity to new regions, there are many markets where the economics of traditional satellites just don’t work. Areas that are remote, rural, or distributed, such as most of Africa, Latin America, Asia, and even parts of North America (e.g. Alaska, Northern Canada), simply lack the population density to justify enough satellite coverage and bandwidth to provide any meaningful connectivity or accessible bandwidth at affordable prices.

Low Earth Mega Constellations

LEO-based mega-constellations have re-emerged (remember Iridium?), with a number of attempts in the works to provide constellations consisting of a few hundred to thousands of satellites in low earth orbit. These constellations promise to provide low-cost, high-performance connectivity access, from the likes of SpaceX, Amazon, Samsung, and OneWeb.

On the surface, they sound phenomenal; global networks, independent of nation-control, with always-on connectivity, for the same price we pay for terrestrial broadband today. But pull back the curtain and you find a number of challenges. These systems:

  • Cost billions of dollars to launch and replenish thousands of satellites into LEO (SpaceX estimated its Starlink constellation would cost $10B to $15B for a 12K-satellite constellation, and recently announced it wants to add an additional 30K satellites)
  • Take years to deploy and will exceed the available launch capacity globally (by current math, SpaceX’s Starlink would require 177 Falcon 9 launches at 24 satellites per flight, which would take 8 years at the current pace).
  • Spend billions more trying to offer their service direct to consumers, and require consumers to purchase custom user terminals (antennas) to receive the service. The latest estimates put those phased-array antennas at $10K per household to start, eventually making their way down to $2K each.
  • Assume super-low-latency laser interlinks between satellites that have yet to be fully proven in order to provide continuous coverage (in LEO, a satellite orbits the entire earth every 90–120 minutes).
  • Have challenging economics, where a simple calculation of [program cost] / [subscriber revenue minus acquisition & setup cost] would price out the majority of the free world.

Atmospheric Balloons & Drones

And lastly, atmospheric balloons and drones. Both Facebook and Google, amongst others, have experimented with in-atmosphere approaches to beam connectivity into constrained geographies, an idea that has been explored for over 30 years.

  • Google recently spun out the Loon project, a proposed network of stratospheric balloons designed to bring Internet connectivity to rural and remote communities, focused on partnering with mobile network operators to expand their LTE service.
  • Stratospheric balloons, flying at 60,000–80,000ft altitude, only last a few months before deterioration, require sunny weather as they rely on solar for power, and require the use of radio spectrum, meaning they must rely on local regulators to grant access or rely on a unified band.
  • Google has reported struggling with operational and capital costs to support and scale such solutions to their markets in ways those regions can economically sustain.
  • Facebook tried something similar with its Aquila project, a solar-powered drone for use as an atmospheric satellite, intended to act as a relay station providing internet access to remote areas. In both examples, existing relay stations must be available first to act as repeaters. Facebook ultimately shut down the Aquila project due to questions about its technical feasibility.

We didn’t see any of these solutions meeting the criteria required to serve the next four billion coming online; each failed on one or more of the following:

  • Too expensive for the service provider and/or end user
  • Left out remote or rural populations
  • Provided insufficient connectivity to the needs of the region
  • Relied on local governments to provide subsidies and/or land rights
  • Could not scale rapidly once proven
  • Lacked any path to positive unit economics
  • Required unreasonable capital investments before initial service
  • Carried technology risk that was too high or largely unproven

That’s until we met John and Ryan at Astranis.

Coming out of Planet and running the Commercial Spaceflight Federation, John and Ryan deeply understood the impact the space economy could have on the world’s infrastructure and how it could revolutionize critical services globally across communication, sensing, intelligence, and connectivity. They recognized that the existing infrastructure was maxed out and that the new solutions coming to market wouldn’t meet the needs of the emerging internet population.

The most ‘practical’ approaches had huge drawbacks, as mentioned earlier. LEO-based solutions are able to micro-size the satellites (down to the 150kg–500kg range from 6,000kg+), making them far more efficient to build and launch into orbit, but they require constellations in order to provide continuous coverage and service (each satellite circles the globe every 90–120 minutes). GEO-based solutions are able to provide continuous coverage with just a single spacecraft, but they are massive capital expenses and build-outs (recall the 6,500 kg size, $300M build cost and 5 years to deploy), and they rely on network operators to take on the capital burdens of the spacecraft and its future utilization.

When everyone was zigging to mega-constellations and huge GEO spacecraft, the team at Astranis zagged, rethinking the problem from the ground up. Astranis was born from the idea that you could provide stable and continuous broadband coverage from GEO with flexible, software-defined micro-satellites that are 20x–30x smaller and cheaper than traditional satellites, providing connectivity to targeted markets at equivalent bandwidth and capacity.

This approach enables them to launch into hyper-targeted markets, such as Alaska, remote regions of Africa and Asia, the far Pacific, and more, doubling or tripling the amount of capacity offered to those regions’ citizens at market-rate prices. In emerging markets, that’s a fraction of what consumers pay for available capacity today.

Along the journey, they signed a landmark deal and partnership with Pacific Dataport to provide over 7.5 Gbps of capacity, roughly tripling the currently available satellite capacity in Alaska while bringing costs down to roughly one-third of current pricing for both residential and wholesale customers.

The team, technical achievements, commercial success, and approach to market met all of the dimensions we were looking for.

Their solution provides meaningful bandwidth at broadband speeds, relies on commodity antennas and terminals that require no new ground infrastructure, offers services at far more affordable prices, and targets specific markets and regions at significantly lower build and launch costs.

As a result, we had the privilege of leading their Series B, with our good friends joining us from Andreessen Horowitz, 50 Years, Refactor, and other funds and angels in this audacious mission.

Providing internet backhaul is the most fundamental infrastructure to bring connectivity to a region and community. Astranis’ low cost, flexible, and powerful micro-sats have the opportunity to become the new standard for how the developing world accesses and thrives on the internet for the first time.

We couldn’t be more excited to be on the journey with them. Ad Astra!


Connecting the next four billion people, Venrock’s investment into Astranis was originally published on Medium.

Source: https://medium.com/@ethanjb

Ethan Batraski and Racquel Bracken Promoted to Partner

Todd Graham Joins with Focus on Cybersecurity and Infrastructure

For over 40 years, Venrock has been committed to diversified, early stage investing – supporting entrepreneurs who will run through walls to create new products and services, surmounting obstacles that most think impossible. The Venrock team consists of individuals with diverse backgrounds and passions, but all share a collaborative approach to investing and supporting companies with a performance driven, long-term view. Ethan Batraski, who invests in frontier tech and enterprise software, and Racquel Bracken, who invests in biotechnology and creates new companies on behalf of Venrock, reflect the firm’s unique approach. We are thrilled to announce that they have both been promoted to Partner.

Ethan joined Venrock’s technology team in 2017, after 15 years as a product executive, founder, and angel investor, with leadership positions at Facebook, Box, and Yahoo!. Since joining Venrock, Ethan has led our investments across space, autonomy, and frontier technologies, including Skyryse, an autonomy company focused on VTOL aircraft, Atom Computing, a neutral atoms based quantum computing company, as well as three additional companies that have not been publicly announced. He also holds 16 patents. Ethan is a passionate early adopter, space geek, and competitive athlete.

Racquel has been a member of Venrock’s healthcare investment team since 2016. She led the Series B round for Cyteir, an oncology start-up focused on DNA damage repair, as well as the formation and seed funding of Federation Bio, a microbial cell therapy company for intractable disease. Since its founding, she has also served as the CEO of Federation Bio in addition to her Venrock investment activities. Prior to Venrock, Racquel was one of the first employees at Clovis Oncology and helped the company from inception through approval of its first product, with responsibilities in business development and commercialization over 7 years. Racquel is an avid outdoorswoman, backcountry skier, mountain biker, and board game lover. 

We are also happy to welcome Todd Graham to the technology team as a Vice President in Palo Alto. Building on Venrock’s history in the security and infrastructure space – including CloudFlare, Shape Security and Check Point Software – Todd is excited about the future of digital transformation, human-based cyber-threats, disruptive go-to-market, and the consumerization of the enterprise experience. Todd joins Venrock from Cisco, where he led corporate strategy for the security and collaboration businesses. Todd was also an early employee at Tablus, a Data Loss Prevention company that was acquired by EMC’s security division, RSA.

We are honored to recognize Racquel, Ethan and Todd’s accomplishments and look forward to their continued partnership and future successes, joining forces with early stage leaders to build substantial, durable businesses that improve lives across the globe. 

To Measure Sales Efficiency, SaaS Startups Should Use the 4×2

Once you’ve found product/market fit, scaling a SaaS business is all about honing your go-to-market efficiency. Many extremely helpful metrics and analytics have been developed to provide instrumentation for this journey. LTV (lifetime value of a customer), CAC (customer acquisition cost), Magic Number, and SaaS Quick Ratio are all very valuable tools. The challenge in using derived metrics such as these, however, is that there are often many assumptions, simplifications, and sampling choices that go into the calculations, leaving the door open to skewed results.

For example, when your company has only been selling for a year or two, it is extremely hard to know your true Lifetime Value of a Customer. For starters, how do you know the right length of a “lifetime”? Taking 1 divided by your annual dollar churn rate is quite imperfect, especially if all or most of your customers have not yet reached their first renewal decision. How much account expansion is reasonable to assume if you only have limited evidence? LTV is most helpful if based on gross margin, not revenue, but gross margins are often skewed initially. When there are only a few customers to service, COGS can appear artificially low because the true costs to serve have not yet been tracked as distinct cost centers as most of your team members wear multiple hats and pitch in ad hoc. Likewise, metrics derived from Sales and Marketing costs, such as CAC and Magic Number, can also require many subjective assumptions. When it’s just founders selling, how much of their time and overhead do you put into sales costs? Did you include all sales related travel, event marketing, and PR costs? I can’t tell you the number of times entrepreneurs have touted having a near zero CAC when they are just starting out and have only handfuls of customers–which were mostly sold by the founder or are “friendly” relationships. Even if you think you have nearly zero CAC today, you should expect dramatically rising sales costs once professional sellers, marketers, managers, and programs are put in place as you scale.
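
As a quick illustration of how fragile the early math can be, the back-of-the-envelope Python sketch below (all figures illustrative) shows how the common shortcut of lifetime = 1 / annual dollar churn swings LTV by multiples depending on a churn rate you cannot yet know with confidence:

    arpa = 50_000        # average annual revenue per account (illustrative)
    gross_margin = 0.70  # assumed steady-state gross margin

    for churn in (0.05, 0.10, 0.20):  # equally plausible early-stage guesses
        lifetime_years = 1 / churn
        ltv = arpa * gross_margin * lifetime_years
        print(f"churn {churn:.0%}: lifetime ~{lifetime_years:.0f} yrs, LTV ${ltv:,.0f}")

With only a year or two of renewal data, any of those churn guesses is defensible, which is exactly why the derived metric can mislead.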

One alternative to using derived metrics is to examine raw data, less prone to assumptions and subjectivity. The problem is how to do this efficiently and without losing the forest for the trees. The best tool I have encountered for measuring sales efficiency is called the 4×2 (that’s “four by two”) which I credit to Steve Walske, one of the master strategists of software sales, and the former CEO of PTC, a company renowned for their sales effectiveness and sales culture. [Here’s a podcast I did with Steve on How to Build a Sales Team.]

The 4×2 is a color coded chart where each row is an individual seller on your team and the columns are their quarterly performance shown as dollars sold. [See a 4×2 chart example below]. Sales are usually measured as net new ARR, which includes new accounts and existing account expansions net of contraction, but you can also use new TCV (total contract value), depending on which number your team most focuses. In addition to sales dollars, the percentage of quarterly quota attainment is shown. The name 4×2 comes from the time frame shown: trailing four quarters, the current quarter, and next quarter. Color coding the cells turns this tool from a dense table of numbers into a powerful data visualization. Thresholds for the heatmap can be determined according to your own needs and culture. For example, green can be 80% of quota attainment or above, yellow can be 60% to 79% of quota, and red can be anything below 60%. Examining individual seller performance in every board meeting or deck is a terrific way to quickly answer many important questions, especially early on as you try to figure out your true position on the Sales Learning Curve. Publishing such “leaderboards” for your Board to see also tends to motivate your sales people, who are usually highly competitive and appreciate public recognition for a job well done, and likewise loathe to fall short of their targets in a public setting.

Sample 4×2 Chart
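
As a complement to the sample chart, here is a minimal sketch of how a 4×2 can be generated from raw per-rep data: dollars sold per quarter, percent of quota attained, and a green/yellow/red bucket using the thresholds suggested above. Rep names, quotas, and sales figures are illustrative.

    QUARTERS = ["Q3", "Q4", "Q1", "Q2", "Q3 (cur)", "Q4 (fcst)"]  # trailing four + current + next

    reps = {
        # rep: (quarterly quota, net new ARR sold per quarter)
        "Rep A": (250_000, [210_000, 260_000, 240_000, 300_000, 180_000, 250_000]),
        "Rep B": (250_000, [120_000, 140_000, 200_000, 150_000, 90_000, 160_000]),
    }

    def color(attainment: float) -> str:
        if attainment >= 0.80:
            return "green"
        if attainment >= 0.60:
            return "yellow"
        return "red"

    for rep, (quota, sold) in reps.items():
        cells = [
            f"{q}: ${dollars/1000:.0f}K ({dollars/quota:.0%}, {color(dollars/quota)})"
            for q, dollars in zip(QUARTERS, sold)
        ]
        print(rep, "|", " | ".join(cells))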

Some questions the 4×2 can answer:

Overall Performance and Quota Targets: How are you doing against your sales plan? Lots of red is obviously bad, while lots of green is good. But all green may mean that quotas are being set too low. Raising quotas even by a small increment for each seller quickly compounds to yield a big difference as you scale, so having evidence to help you adjust your targets can be powerful. A reasonable assumption would be annual quota for a given rep set at 4 to 5 times their on-target earnings potential.

Trendlines and Seasonality: What has been the performance trendline? Are results generally improving? Was the quarter that you badly missed an isolated instance or an ominous trend? Is there seasonality in your business that you have not properly accounted for in planning? Are a few elephant sized deals creating a roller-coaster of hitting then missing your quarterly targets?

Hiring and Promoting Sales Reps: What is the true ramp time for a new rep? You’ll see in the sample 4×2 chart that it is advised to show start dates for each rep and indicate any initial quarter(s) where they are not expected to produce at full capacity. But should their first productive quarter be their second quarter or third? Should their goal for the first productive quarter(s) be 25% of fully ramped quota or 50%? Which reps are the best role models for the others? What productivity will you need to backfill if you promote your star seller into a managerial role?

Evaluating Sales Reps: Are you hitting your numbers because a few reps are crushing it but most are missing their numbers significantly? Early on, one rep coming in at 400% can cover a lot of sins for the rest of the team. Has one of your productive sellers fallen off the wagon or does it look like they just had a bad quarter and expect to recover soon? Is that rep you hired 9 months ago not working out or just coming up to speed?

Teams and Geographies: It is generally useful to add a column indicating the region for each rep’s territory and/or their manager. Are certain regions more consistent than others? Are there managers that are better at onboarding their new sellers than others?

Capacity Planning: Having a tool that shows you actual rep productivity and ramp times gives you the evidence-based information you need to do proper capacity planning. For example, if you hope to double sales next year from $5M ARR to $10M, how many reps do you need to add–and when–based on time to hire, time to ramp, and the percentage of reps likely to hit their targets? Too few companies do detailed bottoms-up planning at this level of granularity, and as a result they fail to hit their sales plans because they simply have too few productive reps in place early enough.
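
Here is that bottoms-up arithmetic as a small Python sketch; the quota, attainment, and ramp figures are illustrative assumptions, not benchmarks:

    import math

    new_arr_needed = 10_000_000 - 5_000_000  # net new ARR required to double
    fully_ramped_quota = 800_000             # assumed annual quota per ramped rep
    attainment_rate = 0.70                   # share of quota the average rep actually delivers
    first_year_ramp_factor = 0.50            # new hires produce ~half a ramped year
                                             # after hiring and ramp time

    expected_per_new_rep = fully_ramped_quota * attainment_rate * first_year_ramp_factor
    reps_to_hire = math.ceil(new_arr_needed / expected_per_new_rep)
    print(f"Each new rep adds ~${expected_per_new_rep:,.0f} in year one; "
          f"hire ~{reps_to_hire} reps to add ${new_arr_needed:,.0f}.")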

There are many other questions the 4×2 can shed light on. I find it especially helpful during product/market fit phase and initial scaling. We tend to spend a few minutes every board meeting on this chart, with the VP Sales providing voice over, but not reciting every line item. As your team gets larger and the 4×2 no longer fits on one slide, it might be time to place the 4×2 in the appendix of your Board deck for reference.

One question often asked about implementing the 4×2 is what numbers to use for the current and next quarter forecasts? If your sales cycles are long, such as 6 months, then weighted pipeline or “commits” are good proxies for your forward forecasts. In GTM models with very short cycles, it will require more subjective judgement to forecast a quarter forward, so I recommend adding a symbol such as ↑, ↔, or ↓ indicating whether the current and forward quarter forecast is higher, lower, or the same as the last time you updated the 4×2. Over time you’ll begin to see whether your initial forecasts tend to bias too aggressively or too conservatively, which is a useful thing to discover and incorporate into future planning.

There are lots of ways to tweak this tool to make it more useful to you and your team. But what I like most about it is that unlike many derived SaaS metrics, the 4×2 is based on very simple numbers and thus “the figures don’t lie.”


To Measure Sales Efficiency, SaaS Startups Should Use the 4×2 was originally published on TechCrunch’s Extra Crunch.

Source: https://techcrunch.com/extracrunch/

Why I Fell for Nightfall AI

Four years ago I met a recent Stanford grad named Isaac Madan. He had an impressive computer science background, had founded a startup in college, and was interested in venture capital. Though we usually look to hire folks with a bit more experience under their belt, Isaac was exceptionally bright and had strong references from people we trusted. Isaac joined Venrock and for the next two years immersed himself in all corners of technology, mostly gravitating towards enterprise software companies that were utilizing Artificial Intelligence and Machine Learning. Isaac packed his schedule morning, noon, and night meeting with entrepreneurs, developing a deep understanding of technologies, go-to-market strategies, and what makes great teams tick. Isaac was a careful listener, and when he spoke, his comments were always insightful, unique, and precise. Within a year, he sounded like he had been operating in enterprise software for over a decade.

After two years in venture, Isaac got the itch to found another startup. He paired up with a childhood friend, Rohan Sathe, who had been working at Uber. Rohan was the founding engineer of UberEats, which, as we all know, grew exceptionally fast and today generates over $8Bn in revenue. Rohan was responsible for the back-end systems, and saw firsthand how data was sprayed across hundreds of SaaS and data infrastructure systems. Rohan had observed that the combination of massive scale and rapid business change created significant challenges in managing and protecting sensitive data. As soon as they teamed up, Isaac and Rohan went on a “listening tour,” meeting with enterprise IT buyers to ask about their business priorities and unsolved problems and to see if Rohan’s observations held true in other enterprises. Isaac and I checked in regularly, and he proved to be an extraordinary networker, leveraging his contacts, his resume, and the tenacity to cold call, conducting well over 100 discovery interviews. Through these sessions, it was clear that Isaac and Rohan were onto something. They quietly raised a seed round last year from Pear, Bain Capital Ventures, and Venrock, and started building.

Founders of Nightfall AI: Rohan Sathe (left) and Isaac Madan (right)

One of the broad themes that Isaac and I worked on together while at Venrock was looking for ways in which AI & ML could re-invent existing categories of software and/or solve previously unresolvable problems. Nightfall AI does both. 

On the one hand, Nightfall (formerly known as Watchtower) is the next generation of DLP (Data Loss Prevention), which helps enterprises to detect and prevent data breaches, such as from insider threats–either intentional or inadvertent. DLPs can stop data exfiltration and help identify sensitive data that shows up in systems where it should not. Vontu was one of the pioneers of this category, and happened to be a Venrock investment in 2002. The company was ultimately acquired by Symantec in 2007 at the time that our Nightfall co-investor from Bain, Enrique Salem, was the CEO of Symantec. The DLP category became a must-have and enjoyed strong market adoption, but deploying first-generation DLP required extensive configuration and tuning of rules to determine what sensitive data to look for and what to do with it. Changes to DLP rules required much effort and constant maintenance, and false positives created significant operational overhead.

Enter Nightfall AI. Using advanced machine learning, Nightfall can automatically classify dozens of different types of sensitive data, such as Personally Identifiable Information (PII), without static rules or definitions. Nightfall’s false positive rate is exceptionally low, and their catch rate extremely high. The other thing about legacy DLPs is that they were conceived at a time when the vast majority of enterprise data was still in on-premise systems. Today, however, the SaaS revolution has meant that most modern businesses have a high percentage of their data on cloud platforms. Add to this the fact that the number of business applications and end-users has grown exponentially, and you have an environment where sensitive data shows up in a myriad of cloud environments, some of them expected, like your CRM, and some of them unexpected and inadvertent, like PII or patient health data showing up in Slack, log files, or long forgotten APIs. This is the unsolved problem that drew Isaac and Rohan to start Nightfall. 

More than just a next-gen DLP, Nightfall is building the control plane for cloud data. By automatically discovering, classifying, and protecting sensitive data across cloud apps and data infrastructure, Nightfall not only secures data, but helps ensure regulatory compliance, data governance, safer cloud sharing and collaboration, and more. We believe the team’s impressive early traction, paired with their clarity of vision, will not only upend a stale legacy category in security but also usher in an entirely new way of thinking about data security and management in the cloud. This open-ended opportunity is what really hooked Venrock on Nightfall.

Over the past year, Nightfall has scaled rapidly to a broad set of customers, ranging from hyper-growth tech startups to multiple Fortune 100 enterprises, across consumer-facing and highly-regulated industries like healthcare, insurance, and education. In our calls with customers we consistently heard that Nightfall’s product is super fast and easy to deploy, highly accurate, and uniquely easy to manage. Venrock is pleased to be co-leading Nightfall’s Series A with Bain and our friends at Pear. After 21 years in venture, the thing that I still enjoy most is working closely with entrepreneurs to solve hard problems. It is all the more meaningful when I can work with a high potential young founder, from essentially the beginning of their career, and see them develop into an experienced entrepreneur and leader. I am thrilled to be working with Isaac for a second time, and grateful to be part of Nightfall’s journey.

Source: http://vcwaves.wordpress.com

10,000 Steps on a Knife Edge: Building a Large, Enduring Business

Over the last seven years, a bunch of variables – some influenced by our effort and attention, others definitely out of our control – fell into place much more positively than usual. The result is that Cloudflare and 10X Genomics – companies in which Venrock led Series A financings – both went public this week. These are terrific, emergent businesses creating material, differentiated value in their ecosystems… with many more miles to go. Their founder leaders – Ben Hindson and Serge Saxonov at 10X, Michelle Zatlyn and Matthew Prince at Cloudflare – are inspiring learners with extreme focus on driving themselves and their organizations to durably critical positions in their industries. They, and their teams, deserve kudos for their accomplishments thus far, especially given they are sure to remain more focused on the “miles to go” than the kudos.

The VC humble brag is a particularly unattractive motif, so let’s be clear, these returns for Venrock will be great. Our initial investments, while probably at appropriate risk adjusted prices then, now look exceedingly cheap. In addition, each company had operational success in a forgiving capital environment, so our ownership positions were not overly diluted. Whenever we do crystallize these positions, each will return substantially more than the fund’s committed capital. In addition, we hope that Cloudflare and 10X’s success will help us connect, and partner, with one or two additional tenaciously driven company builders. 

But none of this is the reason for actually putting pen to paper today. The important nugget, too often buried amidst companies’ successes, is reflected in the opening sentence of this piece. Each of these businesses is mid-journey on the path of 10,000 steps on a knife edge.

To the pundits and prognosticators, startups begin with a person and an idea (maybe a garage) that leads to a breakthrough, after which money falls out of the air. This narrative ignores the ongoing barrage of strategic and executional hurdles, as well as the asymmetry of consequences. One wrong move or bad break can erase the gains from many right calls – this is life on the knife edge. This phenomenon of disproportionately large negative repercussions has corollaries in the realm of integrity and respect – both are difficult to gain, but easy to lose. One step off the knife edge is a problem.

Successful start-ups are the result of teams making thousands of choices – far more good than bad – and, just as importantly, rapidly fixing the bad ones. This is hard, lonely, and unforgiving work that isn’t for most people, especially at the formative stage. These pioneers invest themselves completely to forever change their industries.

So while – for the first time – two Venrock portfolio companies are ringing the bell at different stock exchanges on the same day, knowing these teams, I am certain they will quickly return to their journey of asymmetric risk/reward, because it is their nature. These IPOs are one step along that knife edge, in this case to gather capital, provide liquidity, and allow for maturation of the shareholder base. Congratulations to the employees of Cloudflare and 10X during this moment of success – thank you.

Force of Nature

I first met Brian O’Kelley in the summer of 2008, when he was looking to raise his first institutional round of funding for AppNexus. Brian had a vision for how the online advertising market was going to evolve and believed that building highly scaled services for both sides of the industry – publishers that want to sell ads and marketers who want to buy them – would enable a more efficient and cost-effective marketplace. Software, data, and machine learning ultimately enabled ad transactions to occur programmatically and in real time on the AppNexus platform.

We at Venrock believed in Brian’s vision, but that was not the main reason we invested. We look first and foremost for special entrepreneurs, like former Venrock entrepreneurs Gordon Moore at Intel, Steve Jobs at Apple, Gil Shwed at Check Point, Steve Papa at Endeca, and more recently Michael Dubin at Dollar Shave Club. We call these folks “Forces of Nature”.

While Venrock does not have a formal or complete definition of a “Force of Nature”, these entrepreneurs all share the following characteristics: they are scary intelligent, have far-reaching vision for a market, think very big, and are EXTREMELY competitive.

Brian had all of these characteristics in spades!

I prepared an early investment description for my partners at Venrock called an NDFM (New Deal First Mention). This memo is designed to briefly describe the business for the broader partnership team to contemplate while an investing partner begins the diligence process.


Click here to read our original investment memo!

I was very interested in the business opportunity, as Venrock had invested in plenty of adtech businesses including DoubleClick many years before, but I was mostly excited about working with Brian. I really believed he was a “Force of Nature”.

One thing about “Forces of Nature” is that they stand out, and pretty much everyone who knows them has an opinion on them. The opinions on Brian were consistent on one point – that he is a visionary and very smart – but all over the map on his ability to build and lead a company. The more time I spent with him, the more he grew on me, and I came to believe he had what it takes.

We then led the first institutional round at AppNexus, investing $5.7MM for more than 20% ownership. When our investment was announced, two VCs from other firms reached out to me to say I had made a mistake backing Brian. This had never happened to me before, but in some ways I liked hearing it, because non-consensus deals tend to have more upside (and downside) than straight-down-the-middle deals. When I shared this feedback (but not the names) with Brian, his reaction was predictable for a “Force of Nature” … he laughed, said they were idiots, and that we were going to build a killer company to prove it.

While sitting on the AppNexus board, I challenged Brian: I would ultimately measure his performance not on the outcome of the company, but on the quality of the team he would recruit, develop, and lead.

Well, Brian has clearly graded out as an A+ CEO: he has built an absolutely, positively world-class management team in NYC. That team helped AppNexus fulfill Brian’s original vision by enabling hundreds of publishers and thousands of marketers to serve billions of ads, which led to AT&T’s acquisition of the company – and about 1,000 AppNexians – announced earlier today.

I would like to thank Brian and that fantastic, all-world, all-universe team – including Michael Rubenstein, Jon Hsu, Ryan Christensen, Ben John, Catherine Williams, Tim Smith, Nithya Das, Pat McCarthy, Kris Heinrichs, Josh Zeitz, Craig Miller, Tom Shields, Julie Kapsch, Dave Osborn, Doina Harris, and many others – for allowing Venrock the opportunity to be part of a very exciting journey and a terrific outcome.

And we now look forward to meeting that next “Force of Nature”!

DJ Patil Joining Venrock

Venrock has a 40-year history of investing across technology and healthcare, including more than a decade at the intersection of those two sectors. Earlier this month we expanded our technology investing team with the addition of Tom Willerer. Continuing our effort to increase the depth and breadth of Venrock’s value-add to entrepreneurs, we are honored that DJ Patil is joining Venrock as an Advisor to the Firm.

According to DJ, “Venrock has a long and incredible history of helping entrepreneurs create new categories – Apple; Cloudflare; Dollar Shave Club; Gilead; Illumina; Nest; Athenahealth. Given their experience across healthcare and technology, they are well-situated to help build an entirely new generation of data science/AI, healthcare, security, as well as consumer and enterprise internet companies. I have known this team for years and am eager to help their effort going forward.”

Best known for coining the term “data science”, DJ helped establish LinkedIn as a data-driven organization while serving as head of data products, Chief Scientist, and Chief Security Officer from 2008 to 2011. DJ spent the last several years as the Chief Data Scientist of the United States, working in the White House under President Obama. The first person to hold this role, he helped launch the White House’s Police Data Initiative and Data-Driven Justice, and co-led the Precision Medicine Initiative. Immediately before the White House, he led the product team at RelateIQ until its acquisition by Salesforce.

DJ will be working with the Venrock investing team and advising Venrock portfolio companies on healthcare, security, data and consumer internet challenges and opportunities. His decades working in the tech industry, combined with his expertise in government, will be a great asset to the Venrock ecosystem.