Category Archives: Technology – Insights

Earn While Doing What You Love

When I was in high school I had the good fortune to earn a spot on the Jones Beach Lifeguard Corps. It was a job that was every bit as fun as it sounds, and because we were unionized state employees, it paid decently too. Our days involved sitting on the lifeguard stand every other hour, staring intently at our patch of ocean, followed by an hour off, during which we were encouraged to exercise, take out the surf lifeboats, or “patrol” the sand. I remember commenting to one of the grizzled veterans several decades my senior that “I would do this job for free.” He looked at me with a knowing eye, tinged with the pitying look of a chess player who knows they are at least two moves ahead of you, and said, “But the thing is, kid, they do pay us to do this.” Those summers were an early lesson in the harmony of getting paid for doing something you truly love.

Because I am passionate about entrepreneurship and software, I am still earning a living doing what I love, as an early stage technology venture capitalist. For many people, however, neither business nor technology sparks joy. For them, teaching yoga, or fitness, or cooking, or magic, or art, or you-name-it, is what they love. Being chained to a laptop with seven browser tabs open so they can create email campaigns, manage customer lists, process payments, and balance their business accounts is at best a necessary evil that enables them to earn income pursuing their passion.

That state of affairs had long held, but in March of 2020 Covid-19 threw a curveball at small businesses everywhere, especially those dependent on serving clients face to face. All of a sudden small business owners needed to go virtual by figuring out how to use Zoom, accept online payments, and hopefully make up some of their lost revenue by serving a potentially bigger, geographically dispersed audience. Many employees of these small businesses saw their work hours shrink or faced painful furloughs. For some of these employees, however, necessity led them to branch out on their own, serving clients directly through video conferencing, with neither the limitations nor the safety net of working for someone else. Add to the mix countless others who saw the opportunity to turn a personal hobby into an income-producing “side hustle” as virtual services quickly went mainstream.

Enter Luma, a startup founded, quite appropriately, by two engineers who had only ever met over video conference.* In March 2020, Dan and Victor quickly saw the need to help solopreneurs, small businesses, and groups invite people to virtual events, accept payments, and manage customer relationships. They applied their skills as full-stack programmers to launch an MVP, which met with early success. Because Zoom was designed primarily for business meetings and webinars, Luma saw an opportunity to leverage Zoom for many other use cases by enabling customizable event pages, CRM and membership management, subscriptions, payments, and easily understandable analytics for event hosts. Luma has been used to host fitness classes, magic shows, cooking classes, writers’ workshops, live podcasts, PTA speaker series, and a myriad of other activities. The list of future features, use cases, and target user segments grows longer every day.

While Dan and Victor were quick to jump into action with Luma back in April, now that Zoom has become a verb they are hardly the only ones to see the need for a virtual event platform. What drew me to invest in these two founders, however, is their incredible ability to get stuff done, their high bar for quality and customer service, and their relentless intellectual curiosity about how best to improve the lives of their users, so that hosts and guests alike can spend more time doing what they love while the fiddly bits of technology and business management become nearly invisible.

One evening a few months ago, I saw Dan and Victor’s customer centricity firsthand. I was about to log on to a parent education event hosted by the Common Ground Speaker Series when I realized I had failed to pre-register and was missing the Zoom link. I found a live chat help button, not knowing whether anyone would actually be there at that late hour, and lo and behold, Victor popped up in the chat within seconds and worked behind the scenes with the event host to get my registration accepted so I could receive the link. Victor himself was providing live support to an event host at the end of a day filled with coding new features, working on strategic planning, creating marketing campaigns, recruiting team members, and donning the dozens of other hats a startup founder does. All that, and I’ve never seen either Victor or Dan without a huge smile on their faces. Luma’s founders embody the commitment, optimism, and truth seeking that great founders embrace, which is ultimately why we invested in them and are so excited for the journey ahead. Luma helps people earn a living doing what they love. I am fortunate to earn my living helping great founders like Dan and Victor.


*Dan has since relocated to San Francisco and the two founders are now bubble-mates working together shoulder to shoulder.


Earn While Doing What You Love was originally published on VCWaves.

Source: https://vcwaves.com/

Building Commercial Open Source Software: Part 3 — Distribution & GTM

Building a Commercial Open Source Company

In our time investing in and supporting open-source companies, we’ve learned several lessons about project & community growth, balancing roadmap and priorities, go-to-market strategy, considering various deployment models, and more.

In the spirit of open source, we wanted to share these learnings, organized into a series of posts that we’ll be publishing every week — enjoy!

PART 3: Sequence your distribution & GTM strategy in layers


1. Vibrant communities make for the best lead generation: The open-source popularity of a project can become a significant factor in driving far more efficiency and virality in your go-to-market motion. The user is already a “customer” before they even pay for it. As the initial adoption of the project comes from developers organically downloading and using the software, you can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle.

2. Start bottoms up, developers first: Focus on product-led growth by building love and conviction at the individual developer level. Make it easy to sign up, deploy, and show value. Leverage developer referrals, community engagement, content marketing, and build the product-led growth mentality into each function of your company. Nailing the developer experience can lead to growth curves that look much more like consumer businesses than enterprise, and lead to much larger deals later on.

3. Nail bottoms up before enterprise selling: You can focus on product-led growth (bottoms up) or account-based marketing (top-down), but not both at the same time. Start with product-led growth to build an experience that developers love. Once you’ve reached some critical mass with a flywheel of developer acquisition, begin to introduce account-based marketing, starting with the expansion of existing customers to learn the enterprise sales motion before going after new accounts.

4. Developer first doesn’t mean developer-only: While nailing the developer-first experience is key to driving strong customer growth, it’s often not sufficient when trying to scale the project into larger-scale deployments. Transforming from a proof of concept to multiple large scale deployments across the customer’s organization requires a different set of decision-makers and requirements (i.e. security, policies, control, SLAs). Be sure to understand how the needs of the organization may differ from the needs of the developer when planning how to expand deal sizes and go after larger customers.

5. Build your sales funnel based on project commitment: Customers come in three coarse flavors: (1) already deployed the OSS project internally, (2) starting to deploy the OSS project internally, or (3) decided to adopt the OSS project. Design a sales motion tailored to each customer journey in order to focus on solving the right problems and challenges.

6. Target the ‘right’ developer: It’s critical to know who you are solving for and what you are solving for them. Going after the wrong developer persona can determine whether the developer community understands and embraces your solution, or not. Is this a solution for DevOps or data engineering? Technical business users or data scientists? An example data infrastructure project could be seen as (a) making it easier for DevOps to manage, (b) shifting the power from DevOps to engineering, (c) helping data engineering leverage better code patterns, or (d) making it more secure for SecOps to manage data access. All four personas have very different problems, with different values associated with them, yet all four are value props of the same solution. Focusing on the right persona, with the most painful problem, where you can continually layer value over time, is critical to building wider community love and commercial adoption.

7. Sell impact, not solutions: Help the customer understand the total cost of ownership (TCO) of your solution vs. an existing, closed, or in-house system — this matters and is rarely done well by customers making buy/build decisions. Understanding the value and ROI your service delivers, both hard and soft, allows you to sell on impact to the business, not on a technical solution. Are you saving developer headcount? Increasing developer productivity? Reducing infrastructure costs? Taking cost out of a more expensive or legacy system? Being clear on the cost savings and velocity benefits of your solution drives up customer contract values.


Building Commercial Open Source Software: Part 3 — Distribution & GTM was originally published on Medium.

Source: https://ethanjb.medium.com/

Building Commercial Open Source Software: Part 2 — Roadmap & Developer Adoption

Building a Commercial Open Source Company

In our time investing in and supporting open source companies, we’ve learned several lessons about project & community growth, balancing roadmap and priorities, go-to-market strategy, considering various deployment models, and more.

In the spirit of open source, we wanted to share these learnings, organized into a series of posts that we’ll be publishing every week — enjoy!

1. Solve for the homegrown gap

When developers struggle to deploy an open-source project into their complex internal environments or infrastructures, they build homegrown solutions instead. Solving for these gaps is what turns developer engagement into commercial customers. That means the product needs to be as easy and seamless as possible to set up and deploy in order to start demonstrating value. Whether it’s providing Kubernetes operators, specific integrations, CLIs, or UIs, make it dead easy to deploy.

2. Offer an enterprise-ready package

Open source is designed for the community, by the community, and by definition wasn’t designed to work out of the box in the enterprise. Comprehensive testing, a certification program, performance guarantees, consistency & reliability, cloud-native readiness, and key integrations are all substantial value propositions on top of an open core.

3. Layer value on top of the open core

Focus on ways to magnify the value of the open core within the customer’s organization; make it easier to deploy, operate, manage, and scale. Adding capabilities such as rich UIs, analytics, security & policies, management planes, logging/observability, integrations, and more make it easier to work within increasingly complex customer environments. For example, Elastic built a number of products on top of their core that made it easier to deploy and manage such as Shield (security), Marvel (monitoring), Watcher (alerting), native graph, and ML packages.

4. Narrow focus until you become ‘the’ company:

Focus narrowly on making it easy and obvious for every company in the world to be on your open core, and use it to grow both the community and customer love to become the [open source project] company. Avoid splitting focus until you’ve generated enough market adoption (e.g., $25M in ARR) to declare yourself the winner. Databricks is the Spark company, Astronomer is the Airflow company, and Confluent is the Kafka company, each by focusing on developing, growing, and scaling the open core.

5. Go horizontal over vertical

Focus on modular, horizontal capabilities that apply to all engineering organizations, of all sizes, and that make the open core and enterprise solution more robust, manageable, performant, and scalable. Horizontal capabilities include those mentioned earlier, such as analytics, logging/observability, management tools, and automation, as well as new capabilities that amplify the value of the core. This might include improved capabilities for data ingress/egress, replacing existing infrastructure components for tighter integration, or moving up/down the stack. Vertical capabilities are focused on specific customer segments or markets, such as offering a ‘financial services package’ or specific offerings designed for large enterprises. This was most recently evident in the diverging strategies of Puppet vs. Chef, which led to Chef’s acquisition at a low revenue multiple.

6. Optimize for developer usage over revenue

In the early commercial days, usage counts more than revenue. You are looking at downloads, stars/fork ratio, contributor velocity on the open source project, and beginning to see reference customers adopting your project on the commercial side. Developer engagement is key to building customer love, deep adoption, and lock-in. These lead to eventual expansions, referrals, customer champions, and all the goodness.
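
As a rough illustration, here is a minimal sketch of pulling a few of these usage signals from GitHub’s public REST API; the repo name is a placeholder, and unauthenticated requests are rate-limited:

    # Sketch: fetch star, fork, and contributor signals for an OSS project.
    import requests

    REPO = "apache/airflow"  # placeholder: swap in the project you track

    repo = requests.get(f"https://api.github.com/repos/{REPO}", timeout=10).json()
    contributors = requests.get(
        f"https://api.github.com/repos/{REPO}/contributors",
        params={"per_page": 100},
        timeout=10,
    ).json()

    stars, forks = repo["stargazers_count"], repo["forks_count"]
    print(f"stars: {stars:,}  forks: {forks:,}  stars/fork: {stars / max(forks, 1):.1f}")
    print(f"contributors (first page): {len(contributors)}")

Tracking these numbers weekly turns “developer engagement” from a feeling into a trendline.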

7. Without telemetry, you’re flying blind

Without understanding how the project is being used, the number of deployments/developers/organizations, service utilization, and adoption curves, it’s difficult to prioritize fixes or features. To observe, manage, and debug problems before they become major issues, implement lightweight telemetry; it offers continuous, unfiltered insight into the developer’s experience. Two key projects, OpenCensus and OpenTracing, have merged to form OpenTelemetry, enabling metrics and distributed traces that can easily be integrated into your project.
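
For instance, here is a minimal sketch of instrumenting a hot path with the OpenTelemetry Python SDK (pip install opentelemetry-sdk); the span and attribute names are illustrative, not a real schema:

    # Sketch: emit a trace span around a hot path so you can see how,
    # and how often, a feature is actually used.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("my-oss-project")  # illustrative instrumentation name

    def handle_deploy(cluster_size: int) -> None:
        with tracer.start_as_current_span("deploy") as span:
            span.set_attribute("cluster.size", cluster_size)  # adoption signal

    handle_deploy(cluster_size=3)

In production you would swap the console exporter for one pointed at your telemetry backend, and make collection opt-out for privacy-sensitive users.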


Building Commercial Open Source Software: Part 2 — Roadmap & Developer Adoption was originally published on Medium.

Source: https://ethanjb.medium.com/

Building Commercial Open Source Software: Part 1 — Community & Market Fit

Since the early 90s, with the emergence of the free software movement at MIT and the popularity of Linux, there has been an accelerating shift away from proprietary, closed software to open source.

Today the open source ecosystem has over 40M registered users, 2.9M organizations, and 44M projects on GitHub alone. Just in 2019, 10M new developers joined the GitHub community, contributing to 44M+ repos across the world.


Open source continues to be the heartbeat of the software community and one of the largest and fastest-growing segments of the market by IPOs, M&A, and market cap, with new projects emerging well beyond low-level systems into machine learning, data infrastructure, messaging, orchestration, and more.

Companies such as Hashicorp, Astronomer*, Puppet, Confluent, and Databricks represent a new approach to commercial open source, focused on deploying their open cores both to the broad developer community and to the largest companies in the world, with their enterprise-ready needs, all while actively contributing to the community, gradually opening up more of the closed platform, and building big, meaningful businesses along the way.

These new approaches are building platforms that wrap open core packages with support, enterprise-focused capabilities, and enterprise-level stability to transform a solution into a highly available, horizontally scalable, modular service that can fit into any set of cloud or infrastructure needs, riding a tidal wave of community growth and demand as the underlying projects proliferate across developers and enterprises.

While there is no one-size-fits-all approach, each of these companies has navigated a complex maze of decisions as they built and scaled their solutions: deciding when building a commercial solution made sense, ensuring the community stayed in primary focus, remaining open yet balancing the needs of the enterprise, deciding when to focus on bottoms-up versus introducing enterprise-wide selling, and how to remain competitive against the cloud providers.

Building a Commercial Open Source Company

In our time investing in and supporting open source companies, we’ve learned several lessons about project & community growth, balancing roadmap and priorities, go-to-market strategy, considering various deployment models, and more.

So in the spirit of open source, we wanted to share these learnings, organized into a series of posts that we’ll be publishing every week — enjoy!

  • Part 1: Building blocks to a commercial open source offering
  • Part 2: Focus your commercial and OSS roadmaps on developer adoption
  • Part 3: Sequence your distribution & GTM strategy in layers
  • Part 4: Your deployment model is your business model

1. Find Community-Project Fit

Ultimately the success of a commercial open source company relies first on an active community of contributors and developers supporting the project and distributing it to more and more developers. This means that building a project the community has decided is worthy of their participation and support is the most important goal starting out. The keys to building a vibrant community center on earning developer love by solving an important and widely felt problem, inspiring and supporting others to actively contribute, and giving a passionate community reason to form around the project, whether through integrations, an ecosystem built on top of it, or new ways to extend it. The key questions we ask ourselves when evaluating a new project are:

  • Is the project seeing increases in contributors, commits, pull requests, and stars?
  • Has the community rallied behind this project amongst the many variants or flavors attempting to solve a similar problem?
  • Is this project introducing a compelling new approach to solving a thorny and widely felt problem?
  • Have the project creators put forward an assertive and opinionated view on the direction of the project, while also being inclusive in ensuring the community has a voice and say?
  • How would developers feel if the project were no longer available?

2. Build Around Project-Market Fit

Next is understanding how your project is being used inside of companies: How are developers using your project? What are the killer use cases? Where are they struggling to use the project? From there, you can decide whether building an enterprise offering around the project makes sense. For instance, is it a run-time service that companies struggle to host, where a managed solution would see strong adoption? Are developers building homegrown solutions to make it work or stitch it together internally? Might customers need enterprise-level security, support, or performance in order to deploy into a production environment? Could the value of an enterprise solution wrapped around the open core eventually multiply if coupled with capabilities such as logging, analytics, monitoring, high availability, horizontal scaling, connectors, security, and so on? Understanding how the project is being used and where there might be value-add for enterprise customers is key before embarking on building an enterprise service.

3. Start With a Loose Open Core

The goal in going open source to enterprise is to see widespread distribution and adoption of a project by a large community of developers, which can eventually turn into a healthy cocoon of demand for an enterprise offering. To do so, it’s best to avoid dogmatic early decisions about going pure open or about what, and how, will be closed. Rather, focus on keeping a loose open core: keep the core open for the life of the project, and build the enterprise offering as source-available and closed-source capabilities that magnify the value of the core when deployed into complex environments or use cases. Over time you can decide to graduate source-available and closed-source capabilities into the open core — more about that in an upcoming post. Keeping a loose open core allows the flexibility to continue to build and grow the community while offering specific SaaS or deployment models that meet the needs of the commercial market, and hopefully keeps both constituencies satisfied.

4. Pick the Right License

Your project’s license structure is key to get right from the start. A permissive software license (MIT, BSD, Apache) allows for mass adoption by unburdening users from needing to contribute back. Protective licenses (GNU GPL) force forks/derivatives to release their contributions back as open source. Then there are variants such as the LGPL, AGPL, and Apache 2.0 with Commons Clause that are mostly permissive but have specific limits, useful if you’re concerned about cloud providers or others freeloading by turning your project into a managed service. Thinking through the competitive risks, such as what groups forking your project might be able to do, or whether the cloud providers could fork a managed service of your project, is critical to designing the right license structure. See, for example, the Redis Labs post on changing from AGPL to Apache 2.0 with Commons Clause and why.

5. Define Clear Principles for Open vs Closed

Constructing the right open core vs. source-available vs. closed-source strategy can be a company-making or company-killing decision. Source-available and closed-core capabilities need to be thought of as value-adds that wrap around the open core, in many cases on a path to eventually graduating into it. Be explicit about the principles you use to decide what to open vs. close, and how/when/if they graduate. A guiding principle for what to make part of the open core vs. closed might be (a) the closed enterprise/commercial edition only focuses on the needs of the enterprise segment, or (b) on the needs of companies that are post-revenue, or (c) on use cases that exceed certain scale/performance requirements. Be explicit about it, write it down, share it with your community. The selected guiding principle then dictates when to release to the open core vs. keep closed. The community will often understand that a strong commercial business is required for continued investment into the project, as long as you are explicit about your intentions and roadmap to continue supporting the community. These transparent principles will avoid many of the conflicts between community and commercial needs, e.g. the community pushing for a feature that competes with the enterprise offering.

6. Maintain Project Leadership

Even as the project creators, maintaining project leadership is key and is not guaranteed. This means striking the right balance between supporting and leading the community: being explicit about the direction of the project, yet engaging deeply with contributors. Take an active role in the PMC if part of the Apache community, lead the AIPs, know the top 50 contributors intimately, and drive the direction of the project. Be responsible for the majority of the new commits and releases for the project. Ensure there is always a reason for new contributors to join and for the community to continue growing.

Next week, we’ll talk about focusing your commercial and OSS roadmaps on developer adoption.

*Venrock is an investor in Astronomer, the enterprise developers of Apache Airflow.


Building Commercial Open Source Software: Part 1 — Community & Market Fit was originally published on Medium.

Source: https://medium.com/@ethanjb

My First Week Investing at Venrock

Dear Founder,

Today begins my first week at Venrock. I’m excited and very humbled to join this long-standing team of whip-smart, hardworking investors in supporting you, the entrepreneur. Drawing on my background, I’ll be investing in consumer, commerce enablement, and SMB services & tools. I imagine my focus may evolve over time. However, there’s one thing that will remain consistent – my commitment in service to you, the founder.

As the new kid on the investing block, I wanted to share three snapshots that give you some color on me. In 2005, Steve Jobs gave my commencement speech at Stanford. One phrase in particular lingered with me over the years: “You can’t connect the dots looking forward; you can only connect them looking backwards.” So here are three snapshots of what’s shaped my perspective on our relationship.

🤝 The Goldman Sachs Years – The Client Always Comes First. That Means You.

Before my transition into tech, I spent eight years at Goldman. It was my first job out of college, and I stayed because I loved my clients and I loved New York City. I covered retail and consumer brands, supporting founders and CEOs through financings, acquisitions, and IPOs.

My first day at Goldman, I was given a laminated sheet of the firm principles. It was the Goldman version of the ten commandments. I forgot most of them, except the first which was in big, bold letters. Principle #1: The client always comes first.

Banking is a services business. Your role is to support your client. And here’s the thing, venture capital is no different. As the founder, you’re my client. It’s my job to earn your trust. This means consistently showing up, listening more than speaking, and supporting you as your company evolves. And if that means flying to Chicago on my birthday in the midst of a midwestern winter, I will do it!

🎢 The Pinterest Years – Startups Are a Roller Coaster. Lean on Each Other.

In 2012, I got the sense that something was happening out west. And I knew where I wanted to be: Pinterest. As someone who loved getting lost on the streets of New York, I was looking for the same serendipitous experience online. The moment I saw Pinterest, I knew this was it.

It took over a year of pitching (aka begging) before I was hired to help build their monetization engine from the ground up. Going from banking, where consistency is the name of the game, to startups, where each day unfolded differently, was equal parts exhilarating and scary as hell.

We all know that startups are an emotional roller coaster ride. I can only imagine how amplified this is for the founder. What got me through the lows were two things – my merry band of coworkers and our unequivocal faith in Pinterest’s success. And that’s the same perspective I bring to my relationship with you. The lows feel easier to overcome when you’re surrounded by those who believe in you.

👤 The Facebook Years – The Next Zuck Looks Like You.

In my roles at Pinterest and Goldman Sachs, I mainly worked with larger companies. So when I heard that Facebook was increasing its focus on SMBs, I jumped at the chance to lead its long tail advertising business. I met small business owners from around the world. And it opened my eyes in a big way to the diversity and ingenuity of entrepreneurs from all walks of life.

There is no “central casting” for the small business owner. This allows for a greater expression of what a successful business owner looks and thinks like. There’s the 24-year-old woman in rural Indonesia who runs a multi-million dollar batik business. There’s the 55-year-old army veteran in Louisiana who’s cornered his local towing market. I saw this diversity of backgrounds in a visceral way while working with SMBs. Quite honestly, I wouldn’t have grasped it otherwise.

The Internet has expanded access in a way that didn’t exist before. This means world-changing businesses can emerge from any number of communities. I strongly believe that the next Facebook will be built by someone who looks and thinks like you. And it’s my job to find and partner with you.

🙋🏻‍♀️ I Look Forward to Meeting You.

It takes a special person to have the tenacity and optimism to build a world-changing company – to bring the art of the possible to the realm of reality. In times like these, we need you more than ever. I look forward to meeting you, supporting you, and riding the highs and lows of the startup roller coaster with you in the weeks, months, and years to come.


My First Week Investing at Venrock was originally published on Notion.

Source: https://www.notion.so/My-First-Week-Investing-at-Venrock-de652714845340c2874793e16bbbc890

Venrock Adds Two New Investors: Mariana Mihalusova and Julie Park

Much has changed during the past six months, but our search for great talent hasn’t stopped. We are excited to welcome two Vice Presidents to the firm, continuing our effort to help build great companies across healthcare and technology. 

Mariana joins the healthcare team with experience across the entire drug development life cycle. Prior to Venrock, she was Executive Director at Celgene, where she led a broad range of preclinical and clinical stage drug programs through early human studies.  She graduated from Harvard with an MBA and Ph.D. in biochemistry after earning her bachelor’s at Brown University. Her focus will be on early stage biotech companies and she was instrumental in Venrock’s recent investment in a stealth oncology antibody drug conjugate company. 

Julie joins our technology team and will focus on investments in consumer, commerce enablement, and SMB tools & services. Most recently, she was an executive at Facebook, where she helped SMBs grow as Director of the global long tail ads business. Previously, she was on the founding product and sales teams at Pinterest. Before moving to the west coast, Julie was a Vice President at Goldman Sachs, where she worked closely with consumer and retail companies. Julie has dual degrees from Stanford, with a Master of Science from the School of Engineering.  

Both Mariana and Julie will be based in our Palo Alto office upon reopening.

Eight Trends Accelerating The Age Of Commercial-Ready Quantum Computing

Every major technology breakthrough of our era has gone through a similar cycle in pursuit of turning fiction to reality.

It starts in the stages of scientific discovery: a pursuit of principle against a theory, a recursive process of hypothesis and experiment. Success at the proof-of-principle stage graduates into a tractable engineering problem, where the path to a systemized, reproducible, predictable system is generally known and de-risked. Lastly, once successfully engineered to the performance requirements, focus shifts to repeatable manufacturing and scale, simplifying designs for production.

Since it was theorized by Richard Feynman and Yuri Manin, quantum computing has been thought to be in a perpetual state of scientific discovery: occasionally reaching proof of principle on a particular architecture or approach, but never able to overcome the engineering challenges to move forward.

That’s until now. In the last 12 months, we have seen several meaningful breakthroughs from academia, venture-backed companies, and industry that look to have cleared the remaining challenges along the scientific discovery curve, moving quantum computing from science fiction that has always been “five to seven years away” to a tractable engineering problem, ready to solve meaningful problems in the real world.

Companies such as Atom Computing*, leveraging neutral atoms for wireless qubit control, Honeywell, with its trapped-ion approach, and Google, with its superconducting metals, have demonstrated first-ever results, setting the stage for the first commercial generation of working quantum computers.

While early and noisy, these systems, even in the 40-80 error-corrected qubit range, may be able to deliver capabilities that surpass those of classical computers, accelerating our ability to perform better in areas such as thermodynamic predictions, chemical reactions, resource optimization, and financial predictions.

As a number of key technology and ecosystem breakthroughs begin to converge, the next 12-18 months will be nothing short of a watershed moment for quantum computing.

Here are eight emerging trends and predictions that will accelerate quantum computing readiness for the commercial market in 2021 and beyond:

1. Dark horses of QC emerge: 2020 will be the year of dark horses in the QC race. These new entrants will demonstrate dominant architectures with 100-200 individually controlled and maintained qubits, at 99.9% fidelities, with millisecond-to-seconds coherence times, representing 2x-3x improvements in qubit power, fidelity, and coherence. These dark horses, many venture-backed, will finally prove that resources and capital are not the sole catalysts for a technological breakthrough in quantum computing.

2. Hybrid classical-quantum applications will power the first wave of commercial applications. Using quantum systems, we can natively simulate quantum theory or elements of nature, such as the characteristics of electrons, and thus molecules and their behaviors. Hybrid systems rely on early quantum systems surpassing what is possible on a classical computer for limited, specialized subproblems, while passing the computed variables back to the classical system to complete the computation. We’ve already seen this emerge for chemistry-related research across materials engineering and pharma.
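
Conceptually, the hybrid loop looks something like the sketch below, where a classical optimizer drives circuit parameters and the quantum step is mocked with a classical stand-in so the example runs as-is; evaluate_energy_on_qpu is a hypothetical placeholder for hardware execution:

    # Sketch of a variational hybrid loop: the "QPU" estimates an expectation
    # value for a parameterized circuit, and a classical optimizer proposes
    # the next parameters.
    import numpy as np
    from scipy.optimize import minimize

    def evaluate_energy_on_qpu(params: np.ndarray) -> float:
        # Placeholder: on real hardware this would prepare a circuit with
        # these parameters, measure repeatedly, and average the results.
        return float(np.cos(params[0]) + 0.5 * np.sin(params[1]))

    result = minimize(evaluate_energy_on_qpu, x0=np.array([0.1, 0.1]), method="COBYLA")
    print("estimated minimum energy:", result.fun)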

3. Early consolidation: We will start to see early consolidation among quantum hardware companies as conglomerates realize they need to abandon, bolster, and/or diversify their current architectural approaches. Companies that don’t have existing investments in quantum will need to acquire their way in to gain access. A number of architectural methods won’t work as well as anticipated (see Microsoft’s elusive particle). As we saw with the hard disk drive and semiconductor industries’ consolidation, those that have proven early technology successes, indicating an approach may become dominant, will be the first to be subsumed.

4. The “quantum software developer” generation emerges thanks to various layers of the quantum stack beginning to become accessible to developers:

  • Access to quantum hardware thanks to cloud providers such as Google, Microsoft and Amazon deploying new managed services.
  • Access to software frameworks thanks to various quantum developer kits released and open-sourced (Microsoft, Google, Baidu, IBM).
  • Access to applications thanks to companies such as Zapata, QCWare and Cambridge Quantum, building quantum-ready applications and simulations across chemistry, finance, logistics, pharma and more that position companies to be ready to leverage new quantum hardware technologies as they become available.

5. Venture capital investments into QC hardware companies will invert by stage, focusing on late-stage, proven technologies and slowing down new seed and Series A investments in QC hardware companies. Most of the venture capital firms who go deep into new forms of computing have made their early-stage QC hardware bets, leaving few firms to target. At the same time, there will likely be an increase in venture investment into later-stage (Series B and on) QC hardware companies as a result of material technical de-risking, end-to-end algorithmic computation, and a path to error correction and scale. As we saw with the semiconductor industry, we will see mainstream venture funds double down on dominant technical approaches.

6. A surge in commercial and government funding for QC companies thanks to a number of tailwinds:

  • More companies are starting to invest in being “quantum ready.” This ranges from internal training to build a more profound awareness of the power of QC, to building quantum-ready applications and simulations for high-value problems, spending upward of $500,000-$1 million per application use case or algorithm.
  • An increasing number of companies are actively paying for access to early quantum hardware in order to build ahead of the curve, even if those systems aren’t capable of accurate or complete computations yet.
  • The National Quantum Initiative Act has earmarked $1.2 billion for quantum research. While these funds will trickle mostly through university research programs over 10 years, they will lead to a number of new spin-outs and shared research across the quantum computing ecosystem.
  • Various legislative draft proposals have been in the works for a “National Science and Technology Foundation,” replacing the NSF, that would spend $100 billion over five years on research related to technologies such as artificial intelligence, quantum computing and 5G telecommunications.
  • Our national security and defense priorities are beginning to crystallize around quantum computing use cases, mirroring many of the “quantum ready” intentions and use cases of enterprises mentioned above, leading to new SBIR and OTA contract awards.

7. Geopolitics is going to push quantum computing into the mainstream. The intensifying competition from China, Canada, Australia, Finland and others will introduce new existential risks around encryption and technological dominance. What if an adversary suddenly gained access to a computing power advantage? Similar to the politicizing of 5G and AI, pushing quantum computing into the national spotlight will increase pressure on our federal government to accelerate U.S.-based quantum leadership.

8. Post-quantum encryption will become a top priority for every CISO. Even if we are 10 to 15 years away from enough error-corrected qubits to break public-key encryption, this isn’t an all-or-nothing problem. Encryption lives in shades of grey, impacted by regional policies, encryption standards, deprecated or legacy installations, and more. For example, there are still over 25 million publicly visible sites relying on SHA-1, a cryptographic standard deprecated in 2011. While the most advanced encryption protocols are likely safe from the next few generations of quantum computers, the volume of deprecated yet active encryption protocols across the web is rampant and can go from a small nuisance to a major problem overnight. NIST is leading the charge on a post-quantum cryptographic standard to be approved by 2024, in hopes of being fully deployed before 2030. In the meantime, it’s best to upgrade to the latest protocols.
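
As one concrete step, here is a hedged sketch of auditing an endpoint’s certificate signature algorithm with Python’s ssl module and the cryptography package (pip install cryptography); the hostname is a placeholder:

    # Sketch: check which hash algorithm signs a server's TLS certificate.
    import ssl
    from cryptography import x509

    HOST = "example.com"  # placeholder: a host you operate

    pem = ssl.get_server_certificate((HOST, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    algo = cert.signature_hash_algorithm
    print(f"{HOST} is signed with {algo.name if algo else 'unknown'}")
    if algo and algo.name == "sha1":
        print("deprecated SHA-1 signature: upgrade this endpoint")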

The beginning of every renaissance or golden age in history has started as a result of the intersections of capability, community, access and motivation. Quantum computing is entering the beginning stages of that age.

Technology breakthroughs are demonstrating stable, durable qubits that can be controlled and scaled. Underlying technologies such as arbitrary waveform generators, software-defined radios, and rapid FPGA development have accelerated the speed of development. New entrants are proving new methods and architectures far superior to those of the past. An ecosystem is developing to support the applications, distribution, and funding that enable access to these systems. The industry is seeing firsthand the power and capability of a quantum system, racing to be the first in line to get their hands on it.

Quantum computing will represent the most fundamental acceleration in computing power that we have ever encountered, leaving Moore’s law in the dust. Welcome to the quantum age.

*Venrock is an investor in Atom Computing.


Eight Trends Accelerating The Age Of Commercial-Ready Quantum Computing was originally published on TechCrunch.

Source: https://techcrunch.com/author/ethan-batraski/

Here Comes Digital Sports Collectibles*

*on a blockchain!

Dapper Labs, in partnership with the NBA and the NBPA, has launched the beta of their NBA TopShot crypto-collectible game. The first part of the experience allows consumers to buy packs of compelling NBA game moments. Users buy packs and try to complete various collections, build their teams and showcases, and start trading with others!


The stats from the beta are fantastic and show consumers are gobbling up these collectibles like many of us do in other collectible categories:

  • The first 900 users have bought more than $1.2M of NBA packs in the last 5 weeks (>$1,300 ARPU; see the quick check below)
  • We have sold out of 96% of available packs thus far (22,000+ packs), often within 3 minutes of each drop (pack pricing varies from $22–$250 each)
  • 51% of payment transactions were conducted with credit card, the remainder with crypto, demonstrating some nice traction from non-crypto users
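
A quick back-of-envelope check of that ARPU figure:

    # Revenue per buyer from the beta numbers quoted above.
    packs_revenue = 1_200_000  # dollars over the first 5 weeks
    buyers = 900

    print(f"ARPU: ${packs_revenue / buyers:,.0f}")  # ~$1,333, i.e. ">$1,300"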

The team has done a fantastic job “hiding” the complexities of wallets, crypto tokens, blockchains and account security. To users, this feels like a simple ecommerce or in-app purchase experience. After a pack purchase, the user watches an exciting unveil of their Moments.

My collection is pretty weak so far, but my son has collected some great ones.


Surprisingly to many, this entire experience is a decentralized app built atop a layer-one smart-contracts blockchain called Flow. Flow was designed and built by Dapper to handle mainstream, highly scalable games and other consumer experiences. Each moment you buy or trade is actually a non-fungible token (NFT) sitting atop a smart contract. This enables true ownership in perpetuity, verifiable provenance, and allows other developers to build games in which your moments can be used.
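
To make the idea concrete, here is a toy model of an NFT in Python (not Flow’s actual Cadence contract language): each Moment is unique, has exactly one owner, and carries an append-only provenance trail.

    # Toy illustration of non-fungible ownership and provenance.
    from dataclasses import dataclass, field

    @dataclass
    class Moment:
        token_id: int   # unique: no two Moments are interchangeable
        play: str       # hypothetical description of the game moment
        owner: str
        provenance: list = field(default_factory=list)

        def transfer(self, new_owner: str) -> None:
            # Every trade extends the verifiable ownership history.
            self.provenance.append(f"{self.owner} -> {new_owner}")
            self.owner = new_owner

    moment = Moment(token_id=42, play="Game-winning three", owner="dapper_mint")
    moment.transfer("collector_a")
    moment.transfer("collector_b")
    print(moment.owner, moment.provenance)

On Flow, this state lives in a smart contract rather than in any one company’s database, which is what makes the ownership durable and lets third-party games reuse your Moments.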

The experience opens to the public (and the other 13,000+ people on the waiting list) sometime this fall, but, in the meantime, you can sign up here to jump the line.

In addition, Dapper is announcing the completion of an additional $12M financing to help scale Flow and TopShot, with participation by some notable NBA players like Andre Iguodala, Spencer Dinwiddie, Aaron Gordon, and JaVale McGee.

Back in November 2018, I detailed our investment thesis in crypto collectibles and why I thought they might usher in a wave of consumer blockchain usage. We are about to see whether this view is right, but the early data looks super-promising…


Here Comes Digital Sports Collectibles* was originally published on Medium.

Source: https://pakman.com/

Building A Responsive Future For Space Launch, Venrock’s Investment Into ABL Space

The backstory of our investment in ABL Space

Beginning of the new space economy

We are at the beginning of a new era in the space economy. There are over 20k satellites filed with the FCC for expected launch within the next 5–8 years. Global space activity was estimated at $360 billion in 2018, with commercial space revenues representing 79% of total activity and expected to double over the next five years.

Over 90% of the satellites filed to launch are in the ‘small satellite’ (150kg–750kg) class. Fueled by demand for global broadband, space-based observation & tracking, increased NASA activity, and more, the need for lower-cost capabilities and faster times to orbit has become paramount. This is a radical shift from the large, expensive, and time-consuming builds of GEO-based satellites. Our investment in Astranis was an example of this.

In parallel, US defense priorities have shifted away from large satellite clusters in an attempt to ‘disaggregate’ existing space infrastructure into small satellite constellations to address emerging threats and create increased resilience. Large satellites are challenging to defend, while Russia and China have demonstrated ground and in-orbit anti-satellite weapons. Increasingly, our space network resiliency will rely on our ability to respond, in ways that demand launching new satellites from austere locations within 24–48 hours to limit operational interruption.

Existing launch capacity

The emerging demand for small satellites doesn’t mesh well with today’s established launch systems. Previously, launch vehicles were designed for a capacity between 8,000kg and 25,000kg. Small sat operators simply couldn’t afford to be primary payloads and were left either as secondary payloads on a launch that wasn’t a good fit for their satellite orbit — costing the satellite precious fuel — or waiting months for a launch that would be more direct. The lack of options creates a bottleneck for next-generation small satellite operators.

This gap created an opportunity for new companies to offer launch capabilities built specifically for the small satellite market’s needs: a reliable and affordable solution, tailored to this spacecraft class, launching into the required orbit at the time required.

A challenged small launch market

In the past few years, projects emerged aimed at addressing this growing need. The conventional wisdom amongst investors went as far as saying the small satellite launch market was oversaturated, with over a hundred projects under development.

In reality, only a handful of these efforts were serious, with any meaningful staffing, capital raised, or technical progress to show for it. Among them, the technology and business models were highly variable, requiring hundreds of millions of dollars in capital to get to a first launch. Given this, we believed the market opportunity was still open for an outstanding company.

We searched for a company going after the right payload/market that could stand up capabilities responsively, with a technical advantage, and at a low program cost. But we saw several challenges in the existing approaches across one or more of the following dimensions:

  1. Technical risk: the architectural approach was ‘too innovative,’ lacked engineering tractability, reinvented propulsion or manufacturing techniques (creating too many unknowns), or was designed as a derivative platform.
  2. Unit Cost risk: The BOM of the rocket was too expensive to generate any meaningful margins, and would require too many launches per year to break even.
  3. Market mismatch: They designed the launch vehicle to go after the nano and cubesat segment of the market (100kg–150kg), whereas the market centered around the small class (150kg–750kg).
  4. Capital risk: The capital requirements of getting to the pad would cost well over $100M, requiring too much capital to get to the first launch and creating capital overhang for early investors to see any meaningful multiple.
  5. Team risk: The teams designed architectures that weren’t lean, modular, or iterative in their designs, costs, and execution.

Investing in the small launch market

This industry risk assessment helped us hone an investment thesis that probed what truly mattered in building an enduring launch business:

1. Are they going after the right market segment with a differentiated price/capacity and responsiveness that will attract significant customer demand? (product/market fit: capacity, price, responsiveness)

2. Are the architecture and technical approach meaningfully differentiated, and can they be executed with a higher chance of success, at a manufacturing cadence that sustains an advantage in the market? (tech advantage: technical risk, manufacturing risk, differentiation)

3. Can this be executed at a lower total program cost, with per-launch unit economics that look more like software margins than classic aerospace? (capital advantage: total program cost, unit economics)

4. Does this team have the ‘special sauce’ of deep technical capabilities, strong commercial instincts, and the ability to raise capital as needed? (team: execution, commercialization, capital)

Until that point, we hadn’t found a company that demonstrated the product/market fit, technical advantage, economics, and team to pull it off — proving that reaching space can be simple, efficient, and routine.

Introducing ABL Space

Harry and Dan started ABL Space with the belief that you could build a simpler, lower-cost, self-contained system with far less capital than launch companies spend today. Building on years at SpaceX, they saw that launching small satellites could be made more flexible, reliable, and affordable than ever.

Harry & Dan, co-founders at ABL Space

At SpaceX, Harry led the grid fin reentry steering system development effort, was on console for many Falcon 9 launches (including the first successful landing in 2015), and managed multiple large production teams. Dan, previously a quant trader and data scientist working across public markets and venture capital, had a unique lens on how to structure and position a launch company. They formed a powerful duo, moving fluidly between technical and commercial domains, identifying opportunities, and building technical teams to pursue them. Their secret weapon was their friendship. Harry and Dan met as freshmen at MIT and developed the deep trust that can only be formed through the ups and downs of thirteen years of career discussions, relationship advice, ski mountaineering, and late nights. It’s this trust that lets them move swiftly, building different parts of the business independently yet moving seamlessly across domains and trading responsibilities, creating a company that feels a lot more like software and engineering than rockets and science.

Using those principles, ABL Space set out to build a launch platform with a philosophically different approach. First, they started the design with the constraints of having the lowest possible unit costs, the highest manufacturability, and the least technical risk. The result is a simple, reliable rocket that can be built in 30 days — far faster and more reliable than more exotic architectures. Second, they operate more like a software company using lean engineering principles and modular designs, focusing on deployability and operations as part of the design requirements.

They believed you should be able to launch rapidly (in as little as 30 minutes after call-up) and from anywhere, setting a new standard for resilience. By using proven design and engineering principles and focusing on building a long-lasting business, a reliable, affordable, and on-time launch is achievable.

While new launch companies clamored into the cube/nano/microsat market, focusing on 200kg and below, ABL looked beyond, seeing the accelerating growth of the smallsat (500kg–1,200kg) market. They positioned themselves as one of the only launch systems built for the incoming market demand.

As a result, the team designed the RS1: a ruggedized, low-cost, easy-to-deploy launch system with a 1,200kg capacity, conceived as an end-to-end product rather than just a launch service. It ships as an entire system, called the GS0, designed to rapidly launch from austere locations: a containerized package (including launch vehicle, propellant, mission control, and payload) that can be set up within 24 hours on any 100′ x 50′ flat concrete pad, with unit economics that look a lot more like software than rockets.

Both a late entrant and first mover, ABL progressed at a pace we had never seen before. They only needed nine months from engine program concept to hotfire, and then only nine months to stage test. While others spent hundreds of millions trying to solve the same problems, ABL maneuvered three times faster than any comparable effort, with almost no wasted effort, capital, or hardware.

Integrated RS1 Stage 2 test campaign

ABL is built and led by a very senior and respected team of SpaceX engineers who fundamentally think more like rocket engineers than rocket scientists, with the ‘special sauce’ of deep technical capabilities, commercial instincts, and the ability to bring it all together. They focus on delivering real value to their customers rather than on press and marketing.

This is just the start, as they extend the same radically different engineering culture to other major technology areas in the future.

As a result, we were fortunate to have been able to invest in ABL’s seed round with our friends at Lockheed Martin in 2019, and earlier this year we at Venrock had the privilege to lead ABL’s Series A and join the board, alongside Lockheed Martin, New Science, and Lynett Capital.

A quick update since we first invested

Since we first invested, ABL has been awarded DoD contracts totaling $44.5M over three years and executed a three-year CRADA with the AFRL Rocket Propulsion Division to test their rockets at Edwards Air Force Base. They activated an integrated RS1 Stage 2 test campaign and are exceeding engine performance targets. They’ve grown their team to over 80 world-class engineers and operators and expanded into full-scale production facilities in El Segundo and Mojave, all by the third anniversary of the company’s founding.

We couldn’t be more excited to partner with Harry, Dan, and the rest of the ABL team. Ad Astra!


Building a responsive future for space launch, Venrock’s investment into ABL Space was originally published on Medium.

Source: https://medium.com/@ethanjb

Using The Attack Cycle To Up Your Security Game

Like the universe, the attack surface is always expanding. Here’s how to keep up and even get ahead.

Most criminal activity is designed to elicit a payoff for the perpetrator, and crime on the Internet is no different. As new surfaces emerge, previous attacks are reconstituted and applied. Cybersecurity tends to follow a cycle, once you know when and what to look for. To (poorly) paraphrase Bob Dylan: You don’t need a weatherman to know which way the wind blows. You just need the experience of being around for a few of these cycles.

The New-New Thing
When we think about cybersecurity threats and associated mitigations, there are three key factors to consider:

  • Attack Surface: The thing that an attacker attempts to compromise, such as a laptop, smartphone, or cloud compute instance.
  • Attack Sophistication: The methods and attack types, including persistence, zero-days, phishing, and spear phishing.
  • Threat Actors: Who the attackers are and their implied motivations, like nation-states seeking intellectual property or organized crime engaged in ransomware.

The attack surface is like the universe: in a perpetual state of expansion. While your laptop is (hopefully) running a recent operating system version with (kind of) timely patches, there’s a good chance that your bank’s ATMs are running Windows XP. When Microsoft retired XP support in 2014, 95% of ATMs were still running the operating system. That number hadn’t improved much four years later, and hackers were gleefully demonstrating these machines spewing cash. This means an IT security team must live in the past and the future.

A solution to a modern problem can introduce a new set of challenges: a new console to learn and new alerts to integrate. However, this presents an excellent, and often necessary, opportunity to repurpose existing budgeted spending. Examples of this include the erosion of traditional antivirus by endpoint detection and response (EDR) and the move from physical Web application firewalls (WAF) to software-based NG-WAFs.

Attack sophistication is directly proportional to the goals of the attackers and the defensive posture of the target. A ransomware ring will target the least-well-defended and the most likely to pay (ironically, cyber insurance can create a perverse incentive in some situations), because there is an opportunity-cost and return-on-investment calculation behind every attack. A nation-state actor seeking breakthrough biotech intellectual property will be patient and well-capitalized, developing new zero-day exploits as they launch a concerted effort to penetrate a network’s secrets.

One of the most famous of these attacks, Stuxnet, exploited vulnerabilities in SCADA systems to cripple Iran’s nuclear program. The attack was thought to have penetrated the air-gapped network via infected USB thumb drives. As awareness of these complex, multi-stage attacks has risen, startups have increased innovation, such as in the behavioral analytics space, where complex machine-learning algorithms model “normal” behaviors and look for that one bad actor.

Threat actors are the individuals and organizations engaged in the actual attack. In the broadest sense of the term, they are not always malicious. I have seen companies hobbled by an adverse audit finding or a compliance lapse. When I was early in the data loss prevention (DLP) market, solutions were sold to detect insider threats stealing intellectual property. This was (and still is) a hard use case to sell against, and it wasn’t until regulations and legislation emerged that required companies to notify if they’d been breached and lost personally identifiable information that the DLP market became a must-have security solution.

It is possible for solutions to advance independently of new threats, actors, or surfaces, frequently when there is a breakthrough in underlying computational capabilities. Examples of this include the use of machine learning to identify file content in order to prevent data loss without rigid rulesets or machine vision to read text from an image-based spear-phishing attack. 
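
As a minimal sketch of that machine-learning approach to content classification, here is an example using scikit-learn with toy training data; a real DLP system would train on labeled corpora of internal documents:

    # Sketch: learn what "sensitive" text looks like instead of writing rules.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = [
        "q3 board deck: unreleased revenue figures",   # toy examples
        "draft patent claims for the new sensor",
        "lunch menu for friday",
        "public press release announcing the event",
    ]
    labels = [1, 1, 0, 0]  # 1 = sensitive, 0 = benign

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels)

    print(clf.predict(["confidential revenue forecast attached"]))  # likely [1]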

It’s All Been Done
In my experience, a new market for cybersecurity solutions is triggered by an expansion of the attack surface. This could be something as seismic as AWS or the iPhone, or as localized as a code framework like Struts or React. With a new attack surface come new management requirements and new attackers, exploiting vulnerabilities and flaws in human interactions. The ensuing data, financial, and reputational losses cause new cybersecurity solutions to emerge.

Typically these solutions will also improve on previous generations, whose limitations become obvious when deployed on a new attack surface. Examples are plentiful. IT system compliance and vulnerability management was confined to inside the enterprise, scanning with agents and crawlers (Qualys, Tenable). With the emergence of public cloud, startups (such as Evident.io and Lacework) appeared to scan for vulnerabilities through native APIs provided by cloud environments.


For its part, antivirus started as relatively simple signature-based protection; if an agent detects a specific executable or behavior, prevent it. But as the attack sophistication increased, next-generation endpoint protection emerged with specialization for file-less attacks, in-memory exploits, etc.
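
In its simplest form, signature-based detection is just a hash lookup, as in this illustrative sketch (the known-bad hash and file are placeholders):

    # Sketch: flag a file whose hash matches a known-bad signature.
    import hashlib
    from pathlib import Path

    KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder; real feeds ship millions of hashes

    def is_flagged(path: Path) -> bool:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        return digest in KNOWN_BAD_SHA256

    sample = Path("suspicious.bin")
    sample.write_bytes(b"demo payload")
    print(is_flagged(sample))  # False unless the hash matches exactly

The weakness is visible in the code: change one byte of the payload and the signature no longer matches, which is exactly why file-less and in-memory attacks forced the next generation of tools.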

Data loss prevention began with simple detection of structured content (Social Security numbers, credit card numbers) in email, Web posts, and end-user devices. There is now a new breed of vendors focused on data leakage from cloud-based services (outside the enterprise datacenter) such as Slack, Box, and GitHub, offerings that didn’t exist when the previous generation of solutions came to market.
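
That first-generation structured-content detection can be approximated with a regular expression plus a Luhn checksum to cut false positives; a small sketch:

    # Sketch: find credit-card-shaped numbers, then validate with Luhn.
    import re

    def luhn_valid(number: str) -> bool:
        digits = [int(d) for d in number[::-1]]
        total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
        return total % 10 == 0

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def find_card_numbers(text: str) -> list:
        hits = []
        for match in CARD_RE.finditer(text):
            candidate = re.sub(r"[ -]", "", match.group())
            if luhn_valid(candidate):
                hits.append(candidate)
        return hits

    print(find_card_numbers("invoice: pay 4111 1111 1111 1111 by friday"))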

The Next Thing
Security practitioners must consider the cybersecurity requirements when new surfaces are deployed or business models change. They should ask four questions to help clarify risks:

  • Has your business changed in a way that increases the likelihood that you will be attacked and/or the attacker sophistication will change?
  • What baseline data have you historically collected, and how will you get the same information from this new surface?
  • What of value is contained within, or generated by, this new surface?
  • How could this new surface be exploited and defended, and does it impact existing surfaces?

The initial question should be asked on a routine basis. COVID-19 changed attacker interest in many small biotech companies in a way their security posture did not anticipate, resulting in an uptick in attacks by nation-states seeking an edge in new treatments and a potential vaccine. The second question is often the one that solution providers initially race to address because it is the most obvious. If there’s an enterprise compliance policy requiring potential vulnerabilities to be remediated, security organizations still must identify those vulnerabilities regardless of where the underlying system is running.

The third question is often the one that gets forgotten. Many data breaches have occurred because a newly deployed surface gave attackers an expanded foothold into an existing, previously “secured” platform. The Target breach is the most well-known example of this, but countless other breaches have happened because of something as trivial as a misconfigured network setting on an Amazon Web Services Virtual Private Cloud.

The recent, near-universal move to remote work will no doubt result in new attacks against home networking infrastructure. It’s important to remember that attackers are not interested in doing more work than necessary (the ROI calculation); as the attack surface shifts, so does the “weakest link” they exploit. Asking these questions and anticipating possible vulnerabilities is critical to getting ahead of the next ransomware attack or zero-day-driven intellectual property robbery.


Using the Attack Cycle to Up Your Security Game was originally published on Dark Reading.

Source: https://www.darkreading.com/