Category Archives: Technology – Insights

Building Commercial Open Source Software: Part 2 — Roadmap & Developer Adoption

Building a Commercial Open Source Company

In our time investing in and supporting open source companies, we've learned several lessons about project & community growth, balancing roadmap and priorities, go-to-market strategy, considering various deployment models, and more.

In the spirit of open source, we wanted to share these learnings, organized into a series of posts that we’ll be publishing every week — enjoy!

1. Solve for the homegrown gap

When developers struggle to deploy an open-source project into their complex internal environments or infrastructure, they build homegrown solutions instead. Solving for these gaps is what turns developer engagement into commercial customers. That means the project needs to be as easy and seamless as possible to set up and deploy so it can start demonstrating value. Whether it's providing Kubernetes operators, specific integrations, CLIs, or UIs, make it dead easy to deploy.

2. Offer an enterprise-ready package

Open source is designed for the community, by the community, and by definition wasn't designed to work out of the box in the enterprise. Comprehensive testing, a certification program, performance guarantees, consistency & reliability, cloud-native packaging, and key integrations are all substantial value propositions on top of an open core.

3. Layer value on top of the open core

Focus on ways to magnify the value of the open core within the customer's organization; make it easier to deploy, operate, manage, and scale. Adding capabilities such as rich UIs, analytics, security & policies, management planes, logging/observability, integrations, and more makes it easier to work within increasingly complex customer environments. For example, Elastic built a number of products on top of their core that made it easier to deploy and manage, such as Shield (security), Marvel (monitoring), Watcher (alerting), native graph, and ML packages.

4. Narrow focus until you become 'the' company

Focus narrowly on making it easy and obvious for every company in the world to be on your open core, and use it to grow both the community and customer love to become the [open source project] company. Avoid splitting focus until you've generated enough market adoption (i.e. $25M in ARR) to declare yourself the winner. Databricks became the Spark company, Astronomer the Airflow company, and Confluent the Kafka company by focusing on developing, growing, and scaling the open core.

5. Go horizontal over vertical

Focus on modular, horizontal capabilities that apply to all engineering organizations, of all sizes, and that make the open core and enterprise solution more robust, manageable, performant, and scalable. Horizontal capabilities include those mentioned earlier, such as analytics, logging/observability, management tools, and automation, as well as new capabilities that amplify the value of the core. This might include improved capabilities for data ingress/egress, replacing existing infrastructure components for tighter integration, or moving up/down the stack. Vertical capabilities are focused on specific customer segments or markets, such as offering a 'financial services package' or specific offerings designed for large enterprises. This was most recently evident in the diverging strategies of Puppet vs Chef, which led to Chef's acquisition at a low revenue multiple.

6. Optimize for developer usage over revenue

In the early commercial days, usage counts more than revenue. You are looking at downloads, the stars-to-forks ratio, and contributor velocity on the open source project, and beginning to see reference customers adopt your project on the commercial side. Developer engagement is key to building customer love, deep adoption, and lock-in. These lead to eventual expansions, referrals, customer champions, and all the goodness.

7. Without telemetry, you’re flying blind

Without understanding how the project is being used, the number of deployments/developers/organizations, service utilization, and adoption curves, it's difficult to prioritize fixes or features. To observe, manage, and debug issues before they become major problems, lightweight telemetry can offer continuous, unfiltered insight into the developer's experience. Two key projects, OpenCensus and OpenTracing, have merged to form OpenTelemetry, enabling metrics and distributed traces that can easily be integrated into your project.
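
As a rough illustration of that last point, here is a minimal sketch of emitting usage telemetry with the OpenTelemetry Python API. The project name, counter name, and attributes are made up for this example, and it assumes an OpenTelemetry SDK and exporter are configured separately (without them, the API calls are harmless no-ops):

    # Minimal telemetry sketch using the OpenTelemetry Python API.
    # Assumes an SDK and exporter are configured elsewhere; names are illustrative.
    from opentelemetry import metrics, trace

    tracer = trace.get_tracer("my_open_source_project")
    meter = metrics.get_meter("my_open_source_project")

    deploy_counter = meter.create_counter(
        "deployments", description="Number of deployments started"
    )

    def deploy(environment: str) -> None:
        # Trace the deployment path and count each attempt, tagged by environment.
        with tracer.start_as_current_span("deploy"):
            deploy_counter.add(1, {"environment": environment})
            # ... actual deployment work would go here ...

    deploy("kubernetes")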


Building Commercial Open Source Software: Part 2 — Roadmap & Developer Adoption was originally published on Medium.

Source: https://ethanjb.medium.com/

Building Commercial Open Source Software: Part 1 — Community & Market Fit

Since the early '90s, with the emergence of the free software movement and the popularity of Linux, there has been an accelerating shift away from proprietary, closed software to open source.

Today the open source ecosystem has over 40M registered users, 2.9M organizations, and 44M projects on GitHub alone. Just in 2019, 10M new developers joined the GitHub community, contributing to 44M+ repos across the world.


Open source continues to be the heartbeat of the software community and one of the largest and fastest-growing segments of the market by IPOs, M&A, and market cap, with new projects emerging well beyond low-level systems into machine learning, data infrastructure, messaging, orchestration, and more.

Companies such as HashiCorp, Astronomer*, Puppet, Confluent, and Databricks represent a new approach to commercial open source, focused on deploying their open cores to the broad developer community and to the largest companies in the world with enterprise-ready needs attached to them, all while actively contributing to the community and gradually opening up more of the closed platform — and building big, meaningful businesses along the way.

These new approaches are building platforms that wrap open core packages with support, enterprise-focused capabilities, and enterprise-level stability to transform a solution into a highly available, horizontally scalable, modular service that can fit into any set of cloud or infrastructure needs, riding a tidal wave of community growth and demand as the underlying projects proliferate across developers and enterprises.

While there is no one-size-fits-all approach, each of these companies has navigated a complex maze of decisions as they built and scaled their solutions: deciding when building a commercial solution made sense, ensuring the community stayed the primary focus, remaining open while balancing the needs of the enterprise, deciding when to focus on bottoms-up adoption or introduce enterprise-wide selling, and how to remain competitive against the cloud providers.

Building a Commercial Open Source Company

In our time investing in and supporting open source companies, we've learned several lessons about project & community growth, balancing roadmap and priorities, go-to-market strategy, considering various deployment models, and more.

So in the spirit of open source, we wanted to share these learnings, organized into a series of posts that we’ll be publishing every week — enjoy!

  • Part 1: Building blocks to a commercial open source offering
  • Part 2: Focus your commercial and OSS roadmaps on developer adoption
  • Part 3: Sequence your distribution & GTM strategy in layers
  • Part 4: Your deployment model is your business model

1. Find Community-Project Fit

Ultimately, the success of a commercial open source company relies first on an active community of contributors and developers supporting the project and distributing it to more and more developers. This means that building a project the community has decided is worthy of their participation and support is the most important goal starting out. The keys to building a vibrant community around your project center on earning developer love by solving an important and widely felt problem, inspiring and supporting others to actively contribute, and giving a passionate community reason to form around it, whether through integrations, an ecosystem built on top of it, or new ways to extend the project. The key questions we ask ourselves when evaluating a new project are:

  • Is the project seeing increases in contributors, commits, pull requests, and stars?
  • Has the community rallied behind this project amongst the many variants or flavors attempting to solve a similar problem?
  • Is this project introducing a compelling new approach to solving a thorny and widely felt problem?
  • Have the project creators put forward an assertive and opinionated view on the direction of the project? Have they also been inclusive in ensuring the community has a voice and say?
  • How would developers feel if the project was no longer available?

2. Build Around Project-Market Fit

Next is understanding how your project is being used inside of companies: how are developers using your project? What are the killer use cases? Where are they struggling to use the project? From there, you can decide whether building an enterprise offering around the project makes sense. For instance, is it a run-time service that companies struggle to host, where a managed solution would see strong adoption? Are developers building homegrown solutions to make it work or stitch it together internally? Might customers need enterprise-level security, support, or performance in order to deploy into a production environment? Could the value of an enterprise solution wrapped around the open core eventually multiply if coupled with capabilities such as logging, analytics, monitoring, high availability, horizontal scaling, connectors, security, etc.? Understanding how the project is being used and where there might be value-add for enterprise customers is key before embarking on building an enterprise service.

3. Start With a Loose Open Core

The goal in going from open source to enterprise is to see widespread distribution and adoption of a project by a large community of developers, which can eventually turn into a healthy cocoon of demand for an enterprise offering. To do so, it's best to avoid dogmatic early decisions about going purely open or about what and how to keep closed. Rather, focus on keeping a loose open core: keep the core open for the life of the project, and build the enterprise offering as source-available and closed-source capabilities that magnify the value of the core when it's deployed into complex environments or use cases. Over time you can decide to graduate source-available and closed-source capabilities into the open core — more about that in an upcoming post. Keeping a loose open core allows the flexibility to continue to build and grow the community while offering specific SaaS or deployment models that meet the needs of the commercial market, and hopefully keeps both constituencies satisfied.

4. Pick the Right License

Your project's license structure is key to get right from the start; a permissive software license (MIT, BSD, Apache) allows for mass adoption by unburdening users from needing to contribute back. Protective licenses (GNU GPL) force forks/derivatives to release their contributions back as open source. Then there are variants such as the LGPL, AGPL, and Apache 2.0 with Commons Clause that are mostly permissive but have specific limits if you're concerned about cloud providers or others freeloading on your project by turning it into a managed service. Thinking through the competitive risks, such as what groups forking your project might be able to do, or whether the cloud providers could fork your project into a managed service, is critical to designing the right license structure. See, for example, the Redis Labs post on changing from AGPL to Apache 2.0 with Commons Clause and why.

5. Define Clear Principles for Open vs Closed

Constructing the right open core vs source-available vs closed-source strategy can be a company-making or company-killing decision. Source-available and closed-source capabilities need to be thought of as value-adds that wrap around the open core, in many cases on a path to eventually graduating into it. Be explicit about the principles you use to decide what to open vs close, and how/when/if capabilities graduate. A guiding principle for what to make part of the open core vs closed might be (a) the closed enterprise/commercial edition only focuses on the needs of the enterprise segment, or (b) on the needs of companies that are post-revenue, or (c) on use cases that exceed certain scale/performance requirements. Be explicit about it, write it down, and share it with your community. The selected guiding principle then dictates when to release into the open core vs keep closed. The community will often understand that a strong commercial business is required for continued investment into the project, as long as you are explicit about the intentions and roadmap to continue supporting the community. These transparent principles will often avoid many of the conflicts between community and commercial needs, i.e. the community pushing for a feature that competes with the enterprise offering.

6. Maintain Project Leadership

Even as the project creators, maintaining project leadership is key and is not guaranteed. This means striking the right balance between supporting and leading the community: being explicit about the direction of the project while engaging deeply with contributors. Take an active role in the PMC if you're part of the Apache community, lead the AIPs, know the top 50 contributors intimately, and drive the direction of the project. Be responsible for the majority of the new commits and releases for the project. Ensure there is always a reason for new contributors to join and for the community to continue growing.

Next week, we’ll talk about focusing your commercial and OSS roadmaps on developer adoption.

*Venrock is an investor in Astronomer, the enterprise developers of Apache Airflow.


Building Commercial Open Source Software: Part 1 — Community & Market Fit was originally published on Medium.

Source: https://medium.com/@ethanjb

My First Week Investing at Venrock

Dear Founder,

Today begins my first week at Venrock. I’m excited and very humbled to join this long-standing team of whip-smart, hardworking investors in supporting you, the entrepreneur. Drawing on my background, I’ll be investing in consumer, commerce enablement, and SMB services & tools. I imagine my focus may evolve over time. However, there’s one thing that will remain consistent – my commitment in service to you, the founder.

As the new kid on the investing block, I wanted to share three snapshots that give you some color on me. In 2005, Steve Jobs gave my commencement speech at Stanford. One phrase in particular lingered with me over the years: “You can’t connect the dots looking forward; you can only connect them looking backwards.” So here are three snippets into what’s shaped my perspective of our relationship.

🤝 The Goldman Sachs Years – The Client Always Comes First. That Means You.

Before my transition into tech, I spent eight years at Goldman. It was my first job out of college, and I stayed because I loved my clients and I loved New York City. I covered retail and consumer brands, supporting founders and CEOs through financings, acquisitions, and IPOs.

My first day at Goldman, I was given a laminated sheet of the firm principles. It was the Goldman version of the ten commandments. I forgot most of them, except the first which was in big, bold letters. Principle #1: The client always comes first.

Banking is a services business. Your role is to support your client. And here’s the thing, venture capital is no different. As the founder, you’re my client. It’s my job to earn your trust. This means consistently showing up, listening more than speaking, and supporting you as your company evolves. And if that means flying to Chicago on my birthday in the midst of a midwestern winter, I will do it!

🎢 The Pinterest Years – Startups Are a Roller Coaster. Lean on Each Other.

In 2012, I got the sense that something was happening out west. And I knew where I wanted to be: Pinterest. As someone who loved getting lost on the streets of New York, I was looking for the same serendipitous experience online. The moment I saw Pinterest, I knew this was it.

It took over a year of pitching (aka begging) before I was hired to help build their monetization engine from the ground up. Going from banking, where consistency is the name of the game, to startups, where each day unfolded differently, was equal parts exhilarating and scary as hell.

We all know that startups are an emotional roller coaster ride. I can only imagine how amplified this is for the founder. What got me through the lows were two things – my merry band of coworkers and our unequivocal faith in Pinterest's success. And that's the same perspective I bring to my relationship with you. The lows feel more surmountable when you're surrounded by those who believe in you.

👤 The Facebook Years – The Next Zuck Looks Like You.

In my roles at Pinterest and Goldman Sachs, I mainly worked with larger companies. So when I heard that Facebook was increasing its focus on SMBs, I jumped at the chance to lead its long tail advertising business. I met small business owners from around the world. And it opened my eyes in a big way to the diversity and ingenuity of entrepreneurs from all walks of life.

There is no "central casting" for the small business owner. This allows for a broader expression of what a successful business owner looks and thinks like. There's the 24-year-old woman in rural Indonesia who runs a multi-million dollar batik business. There's the 55-year-old army veteran in Louisiana who's cornered his local towing market. I saw this diversity of backgrounds in a visceral way while working with SMBs. Quite honestly, I wouldn't have grasped it otherwise.

The Internet has expanded access in a way that didn’t exist before. This means world-changing businesses can emerge from any number of communities. I strongly believe that the next Facebook will be built by someone who looks and thinks like you. And it’s my job to find and partner with you.

🙋🏻‍♀️ I Look Forward to Meeting You.

It takes a special person to have the tenacity and optimism to build a world-changing company – to bring the art of the possible to the realm of reality. In times like these, we need you more than ever. I look forward to meeting you, supporting you, and riding the highs and lows of the startup roller coaster with you in the weeks, months, and years to come.


My First Week Investing at Venrock was originally published on Notion.

Source: https://www.notion.so/My-First-Week-Investing-at-Venrock-de652714845340c2874793e16bbbc890

Venrock Adds Two New Investors: Mariana Mihalusova and Julie Park

Much has changed during the past six months, but our search for great talent hasn’t stopped. We are excited to welcome two Vice Presidents to the firm, continuing our effort to help build great companies across healthcare and technology. 

Mariana joins the healthcare team with experience across the entire drug development life cycle. Prior to Venrock, she was Executive Director at Celgene, where she led a broad range of preclinical and clinical stage drug programs through early human studies.  She graduated from Harvard with an MBA and Ph.D. in biochemistry after earning her bachelor’s at Brown University. Her focus will be on early stage biotech companies and she was instrumental in Venrock’s recent investment in a stealth oncology antibody drug conjugate company. 

Julie joins our technology team and will focus on investments in consumer, commerce enablement, and SMB tools & services. Most recently, she was an executive at Facebook, where she helped SMBs grow as Director of the global long tail ads business. Previously, she was on the founding product and sales teams at Pinterest. Before moving to the west coast, Julie was a Vice President at Goldman Sachs, where she worked closely with consumer and retail companies. Julie has dual degrees from Stanford, with a Master of Science from the School of Engineering.  

Both Mariana and Julie will be based in our Palo Alto office upon reopening.

Eight Trends Accelerating The Age Of Commercial-Ready Quantum Computing

Every major technology breakthrough of our era has gone through a similar cycle in pursuit of turning fiction to reality.

It starts in the stages of scientific discovery: a pursuit of proof of principle against a theory, a recursive process of hypothesis and experiment. Success at the proof-of-principle stage graduates to a tractable engineering problem, where the path to a systemized, reproducible, predictable system is generally known and de-risked. Lastly, once successfully engineered to the performance requirements, focus shifts to repeatable manufacturing and scale, simplifying designs for production.

Since it was first theorized by Richard Feynman and Yuri Manin, quantum computing has been thought to be in a perpetual state of scientific discovery, occasionally reaching proof of principle on a particular architecture or approach but never able to overcome the engineering challenges to move forward.

That's until now. In the last 12 months, we have seen several meaningful breakthroughs from academia, venture-backed companies, and industry that look to have broken through the remaining challenges along the scientific discovery curve, moving quantum computing from science fiction that has always been "five to seven years away" to a tractable engineering problem, ready to solve meaningful problems in the real world.

Companies such as Atom Computing*, leveraging neutral atoms for wireless qubit control, Honeywell, with its trapped-ion approach, and Google, with superconducting metals, have demonstrated first-ever results, setting the stage for the first commercial generation of working quantum computers.

While early and noisy, these systems, even in the 40-80 error-corrected qubit range, may be able to deliver capabilities that surpass those of classical computers, accelerating our ability to perform better in areas such as thermodynamic predictions, chemical reactions, resource optimization, and financial predictions.

As a number of key technology and ecosystem breakthroughs begin to converge, the next 12-18 months will be nothing short of a watershed moment for quantum computing.

Here are eight emerging trends and predictions that will accelerate quantum computing readiness for the commercial market in 2021 and beyond:

1. Dark horses of QC emerge: 2020 will be the year of dark horses in the QC race. These new entrants will demonstrate dominant architectures with 100-200 individually controlled and maintained qubits, at 99.9% fidelities, with millisecond-to-second coherence times, representing 2x-3x improvements in qubit power, fidelity, and coherence times. These dark horses, many venture-backed, will finally prove that resources and capital are not the sole catalysts for a technological breakthrough in quantum computing.

2. Hybrid classical-quantum applications will power the first wave of commercial applications. Using quantum systems, we can natively simulate quantum theory or elements of nature, such as the characteristics of electrons, and thus molecules and their behaviors. Hybrid systems rely on early quantum systems surpassing what is possible on a classical computer: taking advantage of their limited but specialized capabilities while passing the computed variables back to the classical system to complete the computation. We've already seen this emerge for chemistry-related research across materials engineering and pharma.
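
To make the hybrid loop concrete, here is a minimal Python sketch of the pattern: a classical optimizer proposes parameters, a quantum step evaluates them, and the result feeds back. The quantum step below is a hypothetical placeholder (a cheap classical function standing in for a call to quantum hardware or a simulator), not any vendor's actual API:

    # Hybrid classical-quantum loop, sketched with a placeholder "QPU" evaluation.
    import numpy as np
    from scipy.optimize import minimize

    def energy_from_qpu(params: np.ndarray) -> float:
        # Placeholder: in a real hybrid system this would submit a parameterized
        # circuit to quantum hardware and return a measured expectation value.
        return float(np.cos(params[0]) + 0.5 * np.sin(params[1]) ** 2)

    # Classical outer loop: a gradient-free optimizer steers the quantum evaluations.
    result = minimize(energy_from_qpu, x0=np.array([0.1, 0.1]), method="COBYLA")
    print("optimal parameters:", result.x, "estimated minimum:", result.fun)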

3. Early consolidation: We will start to see early consolidation among quantum hardware companies as conglomerates realize they need to abandon, bolster, and/or diversify their current architectural approaches. Companies that don't have existing investments in quantum will need to acquire their way in to gain access. A number of architectural methods won't work as well as anticipated (see Microsoft's elusive particle). As we saw with consolidation in the hard disk drive and semiconductor industries, those that have proven early technology successes, indicating an approach may become dominant, will be the first to be subsumed.

4. The “quantum software developer” generation emerges thanks to various layers of the quantum stack beginning to become accessible to developers:

  • Access to quantum hardware thanks to cloud providers such as Google, Microsoft and Amazon deploying new managed services.
  • Access to software frameworks thanks to various quantum developer kits released and open-sourced (Microsoft, Google, Baidu, IBM).
  • Access to applications thanks to companies such as Zapata, QCWare and Cambridge Quantum, building quantum-ready applications and simulations across chemistry, finance, logistics, pharma and more that position companies to be ready to leverage new quantum hardware technologies as they become available.

5. Venture capital investment in QC hardware companies will invert by stage, focusing on late-stage, proven technologies and slowing down venture investment into new seed and Series A QC hardware companies. Most of the venture capital firms who go deep into new forms of computing have made their early-stage QC hardware bets, leaving few firms to target. At the same time, there will likely be an increase in venture investment into later-stage (Series B and on) QC hardware companies as a result of material technical de-risking, end-to-end algorithmic computation, and a path to error correction and scale. As we saw with the semiconductor industry, we will see mainstream venture funds double down on dominant technical approaches.

6. A surge in commercial and government funding for QC companies thanks to a number of tailwinds:

  • More companies are starting to invest in being "quantum ready." This ranges from internal training to build more profound awareness of the power of QC, to building quantum-ready applications and simulations for high-value problems, spending upward of $500,000-$1 million per application use case or algorithm.
  • An increasing number of companies are actively paying for access to early quantum hardware in order to build ahead of the curve, even if those systems aren’t capable of accurate or complete computations yet.
  • The National Quantum Initiative Act has earmarked $1.2 billion for quantum research. While these funds over 10 years will trickle mostly through university research programs, it will lead to a number of new spin-outs and shared research across the quantum computing ecosystem.
  • Various legislative draft proposals have been in the works for a “National Science and Technology Foundation,” replacing the NSF, that would spend $100 billion over five years on research related to technologies such as artificial intelligence, quantum computing and 5G telecommunications.
  • Our national security and defense priorities are beginning to crystalize around quantum computing use cases, mirroring many of the “quantum ready” intentions and use cases of enterprises as mentioned above, leading to new SBIR and OTA contract awards.

7. Geopolitics is going to push quantum computing into the mainstream. The intensifying competition from China, Canada, Australia, Finland and others, will introduce new existential risks around encryption and technological dominance. What if an adversary suddenly gained access to a computing power advantage? Similar to the politicizing of 5G and AI, pushing quantum computing into the national spotlight will increase pressure on our federal government to accelerate U.S.-based quantum leadership.

8. Post-quantum encryption will become a top priority for every CISO. Even if we are 10 to 15 years away from having enough error-corrected qubits to break public-key encryption, this isn't a zero-sum problem. Encryption lives in shades of grey, impacted by regional policies, encryption standards, deprecated or legacy installations, and more. For example, there are still over 25 million publicly visible sites relying on SHA-1, a cryptographic standard deprecated in 2011. While the most advanced encryption protocols are likely safe from the next few generations of quantum computers, the volume of deprecated yet active encryption protocols across the web is rampant and can go from a small nuisance to a major problem overnight. NIST is leading the charge on a post-quantum cryptographic standard to be approved by 2024, in hopes of being fully deployed before 2030. In the meantime, best to upgrade to the latest protocols.
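
As one small, hedged example of that housekeeping, the sketch below connects to a host and reports the hash algorithm used to sign its certificate, so deprecated choices like SHA-1 stand out. It uses Python's standard ssl and socket modules plus the third-party cryptography package, and the host name is just a stand-in:

    # Report a server certificate's signature hash algorithm (e.g. "sha1", "sha256").
    import socket
    import ssl
    from cryptography import x509

    def signature_hash(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        # Verification is disabled only so we can still inspect certificates that
        # modern trust stores would already reject (such as SHA-1-signed ones).
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der).signature_hash_algorithm.name

    print(signature_hash("example.com"))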

Every renaissance or golden age in history has started as a result of the intersection of capability, community, access and motivation. Quantum computing is entering the beginning stages of that age.

Technology breakthroughs are demonstrating stable, durable qubits that can be controlled and scaled. Underlying technologies such as arbitrary waveform generators, software-defined radios, and rapid FPGA development have accelerated the speed of development. New entrants are proving new methods and architectures far superior to those of the past. An ecosystem is developing to support the applications, distribution, and funding that enable access to these systems. The industry is seeing firsthand the power and capability of a quantum system, racing to be first in line to get its hands on one.

Quantum computing will represent the most fundamental acceleration in computing power that we have ever encountered, leaving Moore’s law in the dust. Welcome to the quantum age.

*Venrock is an investor in Atom Computing.


Eight Trends Accelerating The Age Of Commercial-Ready Quantum Computing was originally published on TechCrunch.

Source: https://techcrunch.com/author/ethan-batraski/

Here Comes Digital Sports Collectibles*

*on a blockchain!

Dapper Labs, in partnership with the NBA and the NBPA, has launched the beta of their NBA TopShot crypto-collectible game. The first part of the experience allows consumers to buy packs of compelling NBA game moments. Users buy packs and try to complete various collections, build their teams and showcases, and start trading with others!


The stats from the beta are fantastic and show consumers are gobbling up these collectibles like many of us do in other collectible categories:

  • The first 900 users have bought more than $1.2M of NBA packs in the last 5 weeks (>$1300 ARPU)
  • We have sold out of 96% of available packs thus far (22,000+ packs), often within 3 minutes of each drop (pack pricing varies from $22–$250 each)
  • 51% of payment transactions were conducted with credit card, the remaining with crypto, demonstrating some nice traction from non-crypto users

The team has done a fantastic job “hiding” the complexities of wallets, crypto tokens, blockchains and account security. To users, this feels like a simple ecommerce or in-app purchase experience. After a pack purchase, the user watches an exciting unveil of their Moments:

My collection is pretty weak so far, but my son has collected some great ones:


Surprisingly to many, this entire experience is a decentralized app built atop a layer-one smart-contract blockchain called Flow. Flow was designed and built by Dapper to handle mainstream, highly scalable games and other consumer experiences. Each moment you buy or trade is actually a non-fungible cryptotoken (NFT) sitting atop a smart contract. This enables true ownership in perpetuity and verifiable provenance, and allows other developers to build games in which your moments can be used.

The experience opens to the public (and the other 13,000+ people on the waiting list) sometime this fall, but, in the meantime, you can sign up here to jump the line.

In addition, Dapper is announcing the completion of an additional $12M financing to help scale Flow and TopShot, with participation from some notable NBA players like Andre Iguodala, Spencer Dinwiddie, Aaron Gordon, and JaVale McGee.

Back in November 2018, I detailed our investment thesis in crypto collectibles and why I thought they might usher in a wave of consumer blockchain usage. We are about to see whether this view is right or not, but the early data looks super-promising…


Here Comes Digital Sports Collectibles* was originally published on Medium.

Source: https://pakman.com/

Building A Responsive Future For Space Launch, Venrock’s Investment Into ABL Space

The backstory of our privileged investment into ABL Space

Beginning of the new space economy

We are at the beginning of a new era in the space economy. There are over 20k satellites filed with the FCC for expected launch within the next 5–8 years. Global space activity was estimated at $360 billion in 2018, with commercial space revenues representing 79% of total space activity, and is expected to double over the next five years.

Over 90% of the satellites filed to launch are in the 'small satellite' (150kg–750kg) class. Fueled by demand for global broadband, space-based observation & tracking, increased NASA activity, and more, the need for lower-cost capabilities and faster times to orbit has become paramount. This is a radical shift from the large, expensive, and time-consuming builds of GEO-based satellites. Our investment in Astranis was an example of this.

In parallel, US defense priorities have shifted away from large satellite clusters in an attempt to 'disaggregate' existing space infrastructure into small satellite constellations to address emerging threats and create increased resilience. Large satellites are challenging to defend, while Russia and China have demonstrated ground-based and in-orbit anti-satellite weapons. Increasingly, our space network resiliency will rely on our ability to respond, in ways that demand launching new satellites from austere locations within 24–48 hours to limit operational interruption.

Existing launch capacity

The emerging demand for small satellites doesn't mesh well with established launch systems. Previously, launch vehicles were designed for capacities between 8,000kg and 25,000kg. Small satellite operators simply couldn't afford to be primary payloads and were left either as secondary payloads on a launch that wasn't a good fit for their satellite orbit — costing the satellite precious fuel — or waiting months for a launch that would be more direct. The lack of options creates a bottleneck for next-generation small satellite operators.

This gap created an opportunity for new companies to offer launch capabilities tailored specifically to the small satellite market's needs: a reliable and affordable solution, built for this spacecraft class, launching into the required orbit at the time required.

A challenged small launch market

In the past few years, projects have emerged aimed at addressing this growing need. The conventional wisdom amongst investors went as far as saying the small satellite launch market was oversaturated, with over a hundred projects under development.

In reality, only a handful of these efforts were serious, with any meaningful staffing, capital raised, or technical progress to show for it. Among them, the technology and business models were highly variable, requiring hundreds of millions of dollars in capital to get to a first launch. With this information, we believed the market opportunity was still available for outstanding companies.

We searched for a company going after the right payload/market, that could stand up capabilities responsively, with a technical advantage, and at a low program cost. But we saw several challenges in the existing approaches across one or more of the following dimensions:

  1. Technical risk: the architectural approach was ‘too innovative,’ lacked engineering tractability, reinvented the propulsion or manufacturing techniques creating too many unknowns, or designed for a derivative platform.
  2. Unit Cost risk: The BOM of the rocket was too expensive to generate any meaningful margins, and would require too many launches per year to break even.
  3. Market mismatch: They designed the launch vehicle to go after the nano and cubesat segment of the market (100kg — 150kg), whereas the market centered around the small class (150kg — 750kg).
  4. Capital risk: The capital requirements of getting to the pad would cost well over $100M, requiring too much capital to get to the first launch and creating capital overhang for early investors to see any meaningful multiple.
  5. Team risk: The teams went about designing architectures that lacked the concepts of being lean, modular, and iterative in their designs, costs, and execution.

Investing in the small launch market

This industry risk assessment helped us hone an investment thesis that probed what truly mattered in building an enduring launch business:

1. Are they going after the right market segment with a differentiated price/capacity and responsiveness that will attract significant customer demand? (product/market fit: capacity, price, responsiveness)

2. Are the architecture and technical approach meaningfully differentiated, and can they be executed with a higher likelihood of success, at a manufacturing cadence that sustains an advantage in the market? (tech advantage: technical risk, manufacturing risk, differentiation)

3. Can this be executed at a lower total program cost, with per-launch unit economics that look more like software margins than classic aerospace? (capital advantage: total program cost, unit economics)

4. Does this team have the 'special sauce' of deep technical capabilities, strong commercial instincts, and the ability to successfully raise capital as needed? (team: execution, commercialization, capital)

Until that point, we hadn’t found a company that demonstrated the product/market fit, technical advantage, economics, and team to pull it off — proving that reaching space can be simple, efficient, and routine.

Introducing ABL Space

Harry and Dan started ABL Space with the belief that you could build a simpler, lower-cost, and self-contained system, with far less capital than launch companies spend today. Building on years at SpaceX, they saw that launching small satellites could be made flexible, reliable, and more affordable than ever.

Harry & Dan, co-founders at ABL Space

At SpaceX, Harry led the grid fin reentry steering system development effort, was on console for many Falcon 9 launches (including the first successful landing in 2015), and managed multiple large production teams. Dan, previously a quant trader and data scientist working across public markets and venture capital, had a unique lens on how to structure and position a launch company. They formed a powerful duo moving fluidly between technical and commercial domains, identifying opportunities, and building technical teams to pursue them. Their secret weapon was their friendship. Harry and Dan met as freshmen at MIT and developed the deep trust that can only be formed through the ups and downs of thirteen years of career discussions, relationship advice, ski mountaineering and late nights. It's this trust that lets them move swiftly, building different parts of the business independently yet seamlessly trading responsibilities across domains, and building a company that feels a lot more like software and engineering than rockets and science.

Using those principles, ABL Space set out to build a launch platform with a philosophically different approach. First, they started the design with the constraints of having the lowest possible unit costs, the highest manufacturability, and the least technical risk. The result is a simple, reliable rocket that can be built in 30 days — far faster and more reliable than more exotic architectures. Second, they operate more like a software company using lean engineering principles and modular designs, focusing on deployability and operations as part of the design requirements.

They believed you should be able to launch rapidly (in as little as 30 minutes after call-up) and from anywhere, setting a new standard for resilience. With proven design and engineering principles and a focus on building a long-lasting business, a reliable, affordable, and on-time launch is achievable.

While new launch companies clamored into the cube/nano/microsat market, focusing on 200kg and below, ABL looked beyond, seeing the accelerating growth of the smallsat (500kg–1200kg) market. They positioned themselves as one of the only launch systems built for the incoming market demand.

As a result, the team designed a ruggedized, low-cost, easy-to-deploy launch system with a 1,200kg capacity, called the RS1, as an end-to-end product rather than just a launch service. It ships as an entire system, called the GS0, designed to rapidly launch from austere locations using a containerized setup (including launch vehicle, propellant, mission control, and payload) that can be stood up within 24 hours on any 100 x 50′ flat concrete pad, with unit economics that look a lot more like software than rockets.

Both a late entrant and first mover, ABL progressed at a pace we had never seen before. They only needed nine months from engine program concept to hotfire, and then only nine months to stage test. While others spent hundreds of millions trying to solve the same problems, ABL maneuvered three times faster than any comparable effort, with almost no wasted effort, capital, or hardware.

Integrated RS1 Stage 2 test campaign

ABL is built and led by a very senior and respected team of SpaceX engineers who fundamentally think more like rocket engineers than rocket scientists, with the 'special sauce' of deep technical capabilities, commercial instincts, and the ability to bring it all together. They focus on delivering real value to their customers rather than on press and marketing.

This is just the start, as they extend the same radically different engineering culture to other major technology areas in the future.

As a result, we were fortunate to have been able to invest in ABL's seed round with our friends at Lockheed Martin in 2019, and earlier this year, we at Venrock had the privilege to lead ABL's Series A and join the board, alongside Lockheed Martin, New Science, and Lynett Capital.

A quick update since we first invested

Since we first invested, ABL was awarded DoD contracts totaling $44.5mm over three years and executed a three-year CRADA with the AFRL Rocket Propulsion Division to test their rockets at Edwards Air Force Base. They activated an integrated RS1 Stage 2 test campaign and are exceeding engine performance targets. They’ve grown their team to over 80 world-class engineers and operators and expanded into full-scale production facilities in El Segundo and Mojave, all by the third anniversary of the company’s founding.

We couldn’t be more excited to partner with Harry, Dan, and the rest of the ABL team. Ad Astra!


Building a responsive future for space launch, Venrock’s investment into ABL Space was originally published on Medium.

Source: https://medium.com/@ethanjb

Using The Attack Cycle To Up Your Security Game

Like the universe, the attack surface is always expanding. Here’s how to keep up and even get ahead.

Most criminal activity is designed to elicit a payoff for the perpetrator, and crime on the Internet is no different. As new surfaces emerge, previous attacks are reconstituted and applied. Cybersecurity tends to follow a cycle, once you know when and what to look for. To (poorly) paraphrase Bob Dylan: You don’t need a weatherman to know which way the wind blows. You just need the experience of being around for a few of these cycles.

The New-New Thing
When we think about cybersecurity threats and associated mitigations, there are three key factors to consider:

  • Attack Surface: The thing that an attacker attempts to compromise, such as a laptop, smartphone, or cloud compute instance.
  • Attack Sophistication: The methods and attack types, including persistence, zero-days, phishing, and spear phishing.
  • Threat Actors: Who the attackers are and their implied motivations, like nation-states seeking intellectual property or organized crime engaged in ransomware.

The attack surface is like the universe: in a perpetual state of expansion. While your laptop is (hopefully) running a recent operating system version with (kind of) timely patches, there's a good chance that your bank's ATMs are running Windows XP. Even after Microsoft retired XP support in 2014, 95% of ATMs were still running the operating system. That number hadn't improved much four years later, and hackers were gleefully demonstrating these machines spewing cash. This means an IT security team must live in the past and the future.

A solution to a modern problem can introduce a new set of challenges: a new console to learn and new alerts to integrate. However, this presents an excellent, and often necessary, opportunity to repurpose existing budgeted spending. Examples of this include the erosion of traditional antivirus by endpoint detection and response (EDR) and the move from physical Web application firewalls (WAFs) to software-based NG-WAFs.

Attack sophistication is directly proportional to the goals of the attackers and the defensive posture of the target. A ransomware ring will target the least well-defended and the most likely to pay (ironically, cyber insurance can create a perverse incentive in some situations), because there is an opportunity cost and return-on-investment calculation for every attack. A nation-state actor seeking breakthrough biotech intellectual property will be patient and well-capitalized, developing new zero-day exploits as they launch a concerted effort to penetrate a network's secrets.

One of the most famous of these attacks, Stuxnet, exploited vulnerabilities in SCADA systems to cripple Iran's nuclear program. The attack was thought to have penetrated the air-gapped network via infected USB thumb drives. As awareness of these complex, multi-stage attacks has risen, startups have increased innovation in areas such as behavior analytics, where complex machine-learning algorithms determine "normal" behaviors and look for that one bad actor.

Threat actors are the individuals and organizations engaged in the actual attack. In the broadest sense of the term, they are not always malicious. I have seen companies hobbled by an adverse audit finding or a compliance lapse. When I was early in the data loss prevention (DLP) market, solutions were sold to detect insider threats stealing intellectual property. This was (and still is) a hard use case to sell against, and it wasn’t until regulations and legislation emerged that required companies to notify if they’d been breached and lost personally identifiable information that the DLP market became a must-have security solution.

It is possible for solutions to advance independently of new threats, actors, or surfaces, frequently when there is a breakthrough in underlying computational capabilities. Examples of this include the use of machine learning to identify file content in order to prevent data loss without rigid rulesets or machine vision to read text from an image-based spear-phishing attack. 

It’s All Been Done
In my experience, a new market for cybersecurity solutions is triggered by an expansion of the attack surface. This could be something as seismic as AWS or the iPhone, or as localized as a code framework like Struts or React. With a new attack surface come new management requirements and new attackers, exploiting vulnerabilities and flaws in human interactions. The ensuing data, financial, and reputational losses cause new cybersecurity solutions to emerge.

Typically these solutions will also improve on previous generations, whose limitations become obvious when deployed on a new attack surface. Examples are plentiful. IT system compliance and vulnerability management was confined to inside the enterprise, scanning with agents and crawlers (Qualys, Tenable). With the emergence of public cloud, startups (such as Evident.io and Lacework) appeared to scan for vulnerabilities through native APIs provided by cloud environments.
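
As a hedged sketch of that pattern (not how Evident.io or Lacework actually work), the snippet below uses AWS's own APIs via boto3 to flag one common misconfiguration: S3 buckets whose ACLs grant access to all users. It assumes AWS credentials are available in the environment:

    # Scan S3 bucket ACLs through the cloud provider's native API (boto3).
    import boto3

    PUBLIC_GRANTEES = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        # Flag any grant that applies to all (or all authenticated) AWS users.
        public = [g for g in acl["Grants"]
                  if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES]
        if public:
            print(f"{name}: publicly accessible via ACL ({len(public)} grants)")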


For its part, antivirus started as relatively simple signature-based protection; if an agent detects a specific executable or behavior, prevent it. But as the attack sophistication increased, next-generation endpoint protection emerged with specialization for file-less attacks, in-memory exploits, etc.

Data loss prevention began with simple detection of structured content (Social Security numbers, credit card numbers) in email, Web posts, and end-user devices. There is now a new breed of vendors focused on data leakage from cloud-based services (outside the enterprise datacenter) such as Slack, Box, and GitHub, offerings that didn't exist when the previous generation of solutions came to market.
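
For a sense of what that first generation of structured-content detection looked like, here is a toy sketch in Python; the patterns and Luhn check are illustrative simplifications, not production DLP rules:

    # Toy DLP-style detector for structured content (SSNs, credit card numbers).
    import re

    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(number: str) -> bool:
        # Luhn checksum: double every second digit from the right.
        digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(digits))
        return total % 10 == 0

    def findings(text):
        hits = ["SSN: " + m for m in SSN.findall(text)]
        hits += ["Card: " + m for m in CARD.findall(text) if luhn_ok(m)]
        return hits

    print(findings("ship to 123-45-6789, card 4111 1111 1111 1111"))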

The Next Thing
Security practitioners must consider the cybersecurity requirements when new surfaces are deployed or business models change. They should ask four questions to help clarify risks:

  • Has your business changed in a way that increases the likelihood that you will be attacked and/or the attacker sophistication will change?
  • What baseline data have you historically collected, and how will you get the same information from this new surface?
  • What of value is contained with, or generated by, this new surface?
  • How could this new surface be exploited and defended, and does it impact existing surfaces?

The initial question should be asked on a routine basis. COVID-19 changed attacker interest for many small biotech companies in a way their security posture did not anticipate, resulting in an uptick in attacks by nation states seeking an edge in new treatments and a potential vaccine. The second question is often the one that solution providers initially race to address because it is the most obvious. If there’s an enterprise compliance policy requiring potential vulnerabilities to be remediated, security organizations still must identify those vulnerabilities regardless of where the underlying system is running. 

The third question is often the one that gets forgotten. Many data breaches have occurred because a newly deployed surface gives attackers expanded access to an existing, previously "secured" platform. The Target breach is the most well-known example of this, but countless other breaches have happened because of something as trivial as a misconfigured network setting on an Amazon Web Services Virtual Private Cloud.

The recent, near-universal move to remote work will no doubt result in new attacks against home networking infrastructure. It's important to remember that attackers are not interested in doing more work than is necessary (the ROI calculation), so a shifting attack surface shifts the "weakest link" they exploit. Asking these questions and anticipating possible vulnerabilities is critical to getting ahead of the next ransomware attack or zero-day-driven intellectual property robbery.


Using the Attack Cycle to Up Your Security Game was originally published on Dark Reading.

Source: https://www.darkreading.com/

Time To Build Robots For Humans, Not To Replace

Thinking about the future of robots and autonomy is exciting: driverless cars, lights-out factories, urban air mobility, robotic surgeons available anywhere in the world. We've seen the building blocks come together in warehouses, retail stores, farms, and on the roads. It is now time to build robots for humans, not to replace them.

We still have a long way to go. Why? Because building robots that intend to work fully autonomously in a physical world is hard.

Humans are incredibly good at adapting to dynamic situations to achieve a goal. Robotic and autonomous systems are incredibly powerful at highly precise, responsive, multivariate operations. A new generation of companies is turning their attention to bringing the two together, building robots to work for humans, not replace them, and reinventing several industries in the process.

Innovation through limitation

New methods of ML, such as reinforcement learning and adversarial networks, have transformed both the speed and capability of robot systems.

These methods work extremely well when:

  1. Designed for well-known tasks.
  2. Within constrained environments and limited variable change.
  3. Where most end states are known.

Where the probability of unforeseen situations and unknown 'rules' is low, robots can work miraculously better than any human can.

An Amazon robot-powered warehouse is an excellent illustration: well-characterized tasks (goods movement), in a constrained environment (warehouse), with limited diversity (structured paths), and known end states (limited task variability).

Robots in a complex world

What about a less structured environment, where there is greater complexity and variability? The probability of errors and unforeseen situations is proportional to the complexity of the process.

In the physical world, what is a robot to do when it encounters a situation it has never seen before? That situation conflicts with the robot's understanding of the expected environment and has unknown end states.

The conflicted robot is precisely the challenge companies are facing when introducing robots into the physical world.

Audi claimed they would hit level 3 autonomy by 2019 (update: they recently gave up). Waymo has driven 20 million miles yet remains operationally and geographically constrained.

Tesla reverted from a fully robotic factory approach back to a human-machine mix, the company stating, “Automation simply can’t deal with the complexity, inconsistencies, variation and ‘things gone wrong’ that humans can.”

Yes — this complex issue will be figured out — but the situation is not solved yet.

To solve these problems in the physical world, we’ve implemented humans as technology guardrails.

Applications such as driverless cars, last-mile delivery robots, warehouse robots, robots making pizza, cleaning floors, and more, can operate in the real world thanks to ‘humans in the loop’ monitoring their operations.

Humans act as remote operators, AI data trainers, or exception managers.

Human-in-the-Loop robotics

The 'human in the loop' has accelerated the pace of technology and opened up capabilities we didn't think we would see in our lifetime, as in the examples mentioned earlier.

At the same time, it has bounded the use cases we build for. When we design robotic systems around commodity skill sets, the range of tasks is limited to just those skills.

Training and operating a driverless car, delivery robot, or warehouse robot all require the same generally held skill sets.

As a result, what robots are capable of today primarily clusters around the ability to navigate and identify people/objects.

As these companies bring their solutions to market, they quickly realize two realities:

(1) Commodity tasks make it easier for others to also attempt a similar solution (as seen with the number of AV and warehouse robot companies emerging over the past few years).

(2) High labor liquidity depresses wages, thus requiring these solutions to fully replace the human, not merely augment them, in high volumes to generate any meaningful economics. E.g., Waymo/Uber/Zoox needs to remove the driver and operate at high volumes to eventually turn a profit.

This commodity approach to robotics has forced these technology developers to completely remove the human from the loop to become viable businesses.

Changing the intersection of robotics and humans

The open question is: is this the right intersection between machine and human? Is this the best we can do to leverage the precision of a robot with the creativity of a human?

Expert-in-the-Loop robotics

To accelerate what robots are capable of doing, we need to shift focus from trying to replace humans to building solutions that put the robot and human hand in hand. For robots to find their way into the critical workflows of our industries, we need them to augment experts and trained technicians.

Industries such as general aviation, construction, manufacturing, retail, farming, and healthcare could be made safer, more efficient, and more profitable, changing the human's role from operator and technician to manager and strategist.

Helicopter pilots could free themselves from the fatiguing balance of flight and control management. Construction machine operators could focus on strategies and exceptions rather than repetitive motions.

Manufacturing facilities could free up workers to focus on throughput, workflow, and quality, rather than tiring manual labor. Retail operators could focus on customer experiences rather than trying to keep up with stocking inventory.

These industries all suffer from limited labor pools, highly variable environments, little technology, and a high cost of errors. Pairing robotic or autonomous systems with the experts who work in these environments could invert the dynamics seen in commodity use cases.

Companies could build solutions that need only to augment the operator, not replace him or her, to meaningfully change the economics of the operation.

Building for an expert-robot generation

A new generation of technology innovation is starting, with companies using robotics and autonomy to change the operating experience across industries:

  • Skyryse* in complex aircraft flight controls.
  • Built Robotics in construction.
  • Path Robotics in manufacturing.
  • Caterpillar in mining.
  • Blue River in agriculture.
  • Saildrone in ocean exploration.
  • Simbe Robotics* in retail.
  • Intuitive Surgical in healthcare.

These robot solutions share many key dimensions:

  • Introduce advanced levels of automation or autonomy that pair with a human operator.
  • Deliver at least two of three value dimensions: safer operation, improved cost of operation, and higher total utilization of assets.
  • Shift the operators’ time to higher-value tasks, eventually enabling them to manage multiple functions in parallel.
  • Primarily software-defined across both control and perception systems.
  • Easily retrofit into customers’ asset base at price points less than 20% of the cost of the underlying asset.
  • Can go to market ‘as a service’ with recurring revenue and healthy margins.

Technology has empowered humankind to be capable of the impossible.

The impossible means making more complex decisions with orders of magnitude more precision and speed. Yet so many industries still rely on human labor and operations over human ingenuity and authority.

As the world adapts to social distancing and remote work, it’s more important than ever to leverage technology as our proverbial exoskeletons to maximize what humans are great at, and let technology do the rest.

*Venrock is an investor in Skyryse and Simbe Robotics


Time to Build Robots for Humans, Not to Replace was originally published on ReadWrite.

Source: https://readwrite.com/

The New Employee Perimeter

The World Turned Upside Down

Historically, corporate networks were designed for employees seated in physical offices. Vendors like Cisco and Juniper built large companies around these “campus networks”. Employees physically connected their desktops to the network and accessed applications running in on-premise data centers. With WiFi-enabled laptops, users got a taste of freedom, so enterprises deployed Virtual Private Networks (VPNs) for access back to on-premise applications from homes and coffee shops. And then many of those applications moved to the cloud. Yet most organizations still have security architectures from the time of tower computers attached to physical ethernet ports.

Cisco’s recent earnings provide the best empirical evidence that network architectures are being inverted by the COVID-19 pandemic. Sales of networking infrastructure are down, while sales of products like VPNs are up significantly. With the world sheltering at home, suddenly remote workers are the rule, not the exception. This shift outside the traditional network perimeter, which began long before COVID-19, presents an opportunity for organizations to finally upgrade their network and security architectures for the way people work today, rather than 20 years ago, and protect themselves against the most common threats.

The New Rules

The new rules for user and network security must assume that the network is temporal and that users are as likely to be on the office network as on their WiFi at home. And the work they do – whether it’s checking a sales forecast or reviewing the financial model for an acquisition – happens across a heterogeneous mix of operating systems and form factors. Why focus on end user security? The vast majority of attacks now prey on the weakest link in IT security: people. A spear phishing email lands, a user clicks through a link, a threat establishes persistence, ransomware and IP theft proliferate, followed by pandemonium. To make matters worse, these attacks have increased dramatically amid the confusion seeded by COVID-19.

Rule 1: Users access resources (mostly) in the cloud. Enterprise applications now live in the cloud, but the implications need to be operationalized. There’s no need to send SaaS traffic through the VPN back to a corporate network; doing so reduces performance and increases bandwidth costs. Intelligently route the traffic to where it needs to go; don’t bludgeon it into a VPN.
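As a rough sketch of what this looks like in practice, the snippet below shows a hypothetical split-tunnel routing decision: known SaaS destinations go straight to the internet, while internal destinations stay on the VPN. The domain lists and the route_for() helper are illustrative assumptions, not any specific vendor’s API.

```python
# Hypothetical split-tunnel routing decision (illustrative only).
# SaaS-bound traffic bypasses the VPN; internal destinations keep using it.

# Destinations that are safe to send directly to the cloud provider.
DIRECT_SAAS_DOMAINS = {"salesforce.com", "office365.com", "workday.com"}

# Internal zones that should remain reachable only over the VPN tunnel.
INTERNAL_SUFFIXES = (".corp.example.com", ".internal")

def route_for(destination: str) -> str:
    """Return 'direct' or 'vpn' for a destination hostname."""
    host = destination.lower().rstrip(".")
    if any(host == d or host.endswith("." + d) for d in DIRECT_SAAS_DOMAINS):
        return "direct"   # SaaS traffic: skip the VPN hairpin
    if host.endswith(INTERNAL_SUFFIXES):
        return "vpn"      # on-premise apps: keep inside the tunnel
    return "direct"       # general web traffic also goes direct by default

if __name__ == "__main__":
    for dest in ("login.salesforce.com", "git.corp.example.com", "news.example.org"):
        print(dest, "->", route_for(dest))
```

In a real deployment this decision would live in the VPN client or remote-access agent policy; the point is simply that routing follows the destination, not a hard-coded tunnel.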

Rule 2: Users leverage a diverse collection of devices. Employees expect a corporate device (laptop) and the ability to add their own devices, such as smartphones and tablets. IT departments must assume this is standard behavior and deploy solutions and policies that support this expectation.

Rule 3: The network perimeter shrinks to the end user. You aren’t going to ship Salesforce a firewall and ask them to install it in front of your CRM instance, so your protections need to be rooted in the end user’s experience. Threats need to be detected and mitigated at the end user level, especially in a post-COVID world where family members are likely sharing devices to accomplish distance learning, remote interactions, and the like.

Rule 4: Consider the implications of Bring Your Own Pipe (BYOP). With everyone working from home, the attack surface has shifted. Home routers, unpatched inkjet printers, security cameras, and smart televisions all represent vectors by which an attacker can compromise valuable intellectual property. Only three months ago Netgear announced a number of critical vulnerabilities affecting popular consumer routers. It’s critical that organizations understand the environmental threats that exist surrounding their end users.

Some of these rules began to emerge well before COVID-19 became a pandemic, first with Google’s BeyondCorp project and then the larger security industry trend towards Zero Trust (which is currently trapped in marketing buzzword purgatory). But their adoption has been slow relative to the expansion of traditional enterprise networking, especially in organizations that were not born in the cloud, and that must change. 

Check Your Posture

To succeed in this new world, organizations must embrace simplified user and device posture-centric security. There are two domains of focus: the end user and the resource they are interacting with. Rather than making binary decisions, solutions should consider key variables to decide whether access is granted, and access can be allowed at a granular level (e.g. a user on a personal device on a guest network can access their own HR data, but not company intellectual property).

  • User authentication: leveraging an Identity as a Service offering, make sure the user is who they claim to be – including multi-factor authentication (MFA), passwordless authentication, etc.
  • Device posture: is the device a corporate managed device? Does it have the latest software updates and patches installed? Has a recent anti-malware scan completed with a clean bill of health? 
  • Normal user behavior patterns: does this user normally access these resources, at these times, from these locations? Did the user just appear to access other resources from a geographical location that is impossible to reconcile?
  • Target: what is the enterprise value of the asset the user is attempting to access? Does it contain proprietary information, personally identifiable information? Is it a known attack vector that is unpatched and vulnerable?

By leveraging the above signals, access policies can be dramatically simplified, moving away from complex and antiquated network-centric rules.
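As a rough illustration of how such granular, posture-centric decisions could be expressed, here is a minimal sketch that weighs the four signals above; the AccessRequest type, its fields, and the decision thresholds are hypothetical assumptions, not a reference to any particular product.

```python
# Minimal, hypothetical sketch of a posture-centric access decision.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # passed IDaaS login, MFA, etc.
    device_managed: bool       # corporate-managed device
    device_patched: bool       # current patches and a clean malware scan
    behavior_normal: bool      # usual resources, times, and locations
    target_sensitivity: str    # "public", "personal", or "proprietary"

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'limited', or 'deny' instead of a binary yes/no."""
    if not req.user_authenticated:
        return "deny"        # identity is the baseline requirement
    if not req.behavior_normal:
        return "deny"        # e.g. an impossible-travel login pattern
    if req.device_managed and req.device_patched:
        return "allow"       # healthy corporate device gets full access
    if req.target_sensitivity == "personal":
        return "limited"     # own HR data from a personal device, for example
    return "deny"            # unmanaged device vs. proprietary data

if __name__ == "__main__":
    byod = AccessRequest(True, False, False, True, "personal")
    print(decide(byod))  # -> "limited"
```

The decision collapses to a handful of user- and device-centric signals rather than the network the request happens to arrive from.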

New World Order

By deploying solutions that answer these questions, organizations can build protective moats around their end users and minimize the damage done by an attack. Organizations can also begin to treat all users as equals, regardless of the device or network they’re operating from. COVID-19 has led to new working norms, and we must embrace the new rules for end user-centric security to keep information and employees safe.


The New Employee Perimeter was originally published on LinkedIn.

Source: https://www.linkedin.com/in/todd-graham-720276/