Category Archives: Insights

Will The Readmission Rate Penalties Drive Hospital Behavior Changes?

This post first appeared in the Health Affairs Blog.

Since the development of the metric in 1984 by Anderson and Steinberg, inpatient hospital readmission rates have been used as a marker for hospital quality.  A good deal of attention is now being paid to the new readmission rate penalties in the Affordable Care Act (ACA).

While the penalties have garnered significant attention, it is unknown whether they will materially change hospital behavior.  In this post, after reviewing the mechanics of the penalties, we take a close look at how they are likely to affect hospital incentives.  We also suggest some refinements to the penalties that could help achieve the aim of reducing preventable readmissions.

How The Penalties Work

The readmission penalty in the ACA is based on readmissions for three conditions: Acute Myocardial Infarction (AMI), Heart Failure, and Community Acquired Pneumonia.  For each hospital, the Centers for Medicare and Medicaid Services (CMS) calculates the risk-adjusted actual and expected readmission rates for each of these conditions.  Risk-adjustment variables include demographic, disease-specific, and comorbidity factors.  The excess readmission ratio is the actual rate divided by the expected rate.

Simplifying a little, the aggregate payments for excess readmissions are summed for all three conditions over the past three years, then divided by total base operating DRG payments for the past three years to calculate the penalty percentage.  Base operating DRG payments are IPPS payments less DSH, IME, and outlier payments, but not new technology add-on payments.  The total penalty is the penalty percentage times total base operating DRG payments for that fiscal year, provided that the total amount does not exceed 1 percent of base operating DRG payments in fiscal year 2013, a cap that increases to 3 percent in fiscal year 2015.
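The simplified mechanics above can be sketched in a few lines of Python.  The dollar figures in the example are hypothetical, chosen only to illustrate the arithmetic, not actual CMS data:

```python
def readmission_penalty(excess_payments_3yr, base_drg_3yr, base_drg_fy, cap=0.01):
    """Simplified sketch of the ACA readmission penalty calculation.

    excess_payments_3yr: payments for excess readmissions, summed across
        AMI, heart failure, and pneumonia over the past three years.
    base_drg_3yr: total base operating DRG payments over the same period.
    base_drg_fy: base operating DRG payments for the penalized fiscal year.
    cap: 0.01 for fiscal year 2013, rising to 0.03 in fiscal year 2015.
    """
    penalty_pct = min(excess_payments_3yr / base_drg_3yr, cap)
    return round(penalty_pct * base_drg_fy, 2)  # round to the cent

# Hypothetical hospital: $600,000 in excess readmission payments against
# $100 million in base DRG payments over three years, $35 million this year.
print(readmission_penalty(600_000, 100e6, 35e6))  # → 210000.0
```

A hospital with excess readmission payments above 1 percent of its base DRG payments would simply hit the cap, so its penalty stops growing with additional excess readmissions.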

According to calculations by CMS, the overall penalty in 2013 is likely to be 0.3 percent of inpatient reimbursements, or $280 million, with 8.8 percent of hospitals receiving the maximum penalty.

Changes in hospital performance are predicated on these values being significant for hospitals.  The natural benchmark for the penalties that providers face is the amount that hospitals earn from potentially avoidable readmissions.  Readmissions in 2010 were estimated by CMS to cost Medicare $17.5 billion.  Studies suggest the avoidable portion of total readmissions ranges from 5 percent to 79 percent, with the median being 27.1 percent.

Multiplying the $17.5 billion in readmission spending by the median avoidable share of 27.1 percent, and then by the average hospital inpatient EBITDA (earnings before interest, taxes, depreciation, and amortization) margin of 2 percent, suggests that the current profit from the avoidable portion of readmissions is about $95 million.  Thus, the penalties appear to be greater than the profits.  This will be increasingly true in the future as Medicare moves to more bundled payments for acute care or global payments on a patient basis.  In such a payment system, there will be no profits for readmission, so the only financial consequence for the hospital will be the penalty.
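The back-of-envelope arithmetic behind the $95 million figure can be reproduced directly from the numbers cited above:

```python
readmission_spend = 17.5e9  # CMS estimate of Medicare readmission cost, 2010
avoidable_share = 0.271     # median estimate of the avoidable portion
ebitda_margin = 0.02        # average hospital inpatient EBITDA margin

profit = readmission_spend * avoidable_share * ebitda_margin
print(f"${profit / 1e6:.0f} million")  # → $95 million
```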

Of course, behavior change is not binary — changes will happen along a spectrum.  Consider the fixed and variable costs for a hospital to address readmission rates.  There are many fixed costs in reducing readmissions, including the need for electronic medical records (EMR), adding non-fungible labor for case management and discharge planning, and training employees in discharge planning.  If the penalty on the hospital is high enough, the fixed cost inertia can be overcome.  Relatedly, legislation is helping to reduce the size of these fixed costs by incentivizing investment in EMRs.

For a hospital that has overcome the fixed costs, there are also variable costs necessary for reducing readmissions.  These may include more nursing time per admission, additional supplies and post-discharge medications, and care coordination costs.  Variable costs are potentially offset if beds are filled by another admission that generates higher margins.

Refining The Penalties

The interplay of the fixed and variable cost factors suggests that movement for hospitals will not start until the fixed cost is overcome, but once that threshold is surpassed, those readmissions most easily prevented will be taken out of the system.  Rapid downward movement will then slow, as the net benefit shrinks for each patient.  These types of costs suggest a policy modification of the economic incentives in the future: a sliding scale based on the number of readmissions to make each additional readmission increasingly more expensive.
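One way to implement such a sliding scale is a geometrically escalating fine, sketched below with hypothetical parameters (the post does not specify dollar amounts or an escalation rate):

```python
def sliding_scale_penalty(excess_readmissions, base_fine=1_000.0, escalation=1.25):
    """Each additional excess readmission costs `escalation` times more than
    the previous one, so the marginal penalty rises with volume."""
    return sum(base_fine * escalation ** i for i in range(excess_readmissions))

# The marginal cost of the 3rd excess readmission ($1,562.50) exceeds that
# of the 1st ($1,000), preserving the incentive to keep cutting readmissions
# even after the easy cases are gone.
print(sliding_scale_penalty(3))  # → 3812.5
```

Under this structure, the net benefit of preventing one more readmission no longer shrinks to zero as a hospital works through its most easily prevented cases.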

Another important component of the readmission calculation is risk adjustment.  Currently, the risk adjustment in the CMS penalty program includes only two demographic factors: age and gender.  But studies suggest that higher readmission rates are linked with race and location.  If we assume that lower-income populations face higher readmissions for reasons that are harder to prevent, the penalty will disproportionately affect hospitals that cannot realistically match their expected readmission rate.

To estimate the magnitude of this effect, we assumed that the readmission rate difference of 3.5 percentage points between minority and non-minority hospitals was purely structural.  If a hospital that would otherwise be at the national average faces this type of difference, the calculated penalty would be 18 percent of revenue for those admissions.  This would be inequitable and likely ineffective in reducing readmission rates, since it targets readmissions that are unavoidable.

The extent to which hospitals reduce readmissions will depend on other factors as well.  The organizational structure of the hospital — for example, whether it is part of an Accountable Care Organization (ACO) and who leads the organization — could matter a great deal.  The fixed-cost base is much higher for hospital-owned ACOs than for physician-owned ACOs; parts of a hospital cannot be shut down, even if the bed is not filled.  For this reason, hospital-owned ACOs may be less focused on readmission rate reductions in the short run, but more focused over time as they make capital allocation decisions.

Public perception, through increased transparency from efforts like Hospital Compare, may also push hospitals to change behavior.  Studies show that hospitals have been shamed into improving mortality rates once those rates are publicly measured.

In all, the new readmission penalty holds a good deal of promise.  But to maximize benefits to patients, cost savings, and rate of improvement, future refinements are needed to better align incentives and adjust for more patient characteristics.


Connected Security

While at CES, I noticed a large number of connected TVs and connected cars, which led me to think that these devices are ripe for breach by hackers.  So I went digging, and lo and behold, thieves in the UK have already begun the looting process.  Thus I felt it was a topic worth writing about.  Here are my two cents.

Why Does Connected TV Security Matter?

Cyber attacks on Android- and iOS-powered cell phones and computers have increased dramatically over the past five years.  Now that televisions are becoming “smarter” by being powered through these platforms, the attacks have become more sophisticated.

Increasingly, televisions are incorporating smart operating systems that give them wifi connectivity and the ability to run apps.  From there, criminals can hack into the system through an app.  Cyber criminals have already discovered a flaw in some Samsung smart TVs that allows them to listen in and look into households through the television.

Do Cyber Criminals Really Care About Connected Cars?

Cyber criminals care about any smart device that allows them to gain access to your personal details.  They no longer just have an interest in stealing your login details to social media networks or your bank account information; cyber criminals are now interested in controlling your car, as well. 

Connected cars have wifi connectivity that enables the driver to access GPS, check email, and stream movies.  Some connected cars offer security features through connected devices, including the braking and door-locking systems.  Cyber criminals could potentially hack into your connected car system, giving them the opportunity to take control of your engine speed, car security alarm, wifi connectivity, door-locking system, and braking system.

How to Make Sure You’re Safe

Manufacturers of connected vehicles have already been briefed on these threats and are working to create patches and encryption that make it harder for potential cyber criminals to hack in.  The U.S. Department of Transportation is also testing connected car devices to decrease their vulnerabilities.

Downloading security apps and making sure your devices are password-enabled helps to decrease your risk of being hacked; however, some criminals are still able to bypass the systems. 

Covisint, Microsoft, Cisco, and McAfee are all devising ways to reduce the risk of cyber attacks through connected televisions and vehicles.  These measures include cloud services that restrict access to connected devices, beefed-up security access, SSL encryption, and authorization certificates.

I would love to hear further thoughts on the matter.  What other companies out there should I be paying attention to?


How Three Federal Initiatives Are Set To Transform U.S. Health Care

This post first appeared in Forbes.

The U.S. health care system today is fraught with wasteful spending that does not contribute to better outcomes for patients. The U.S. spends more on health care than any other country in the world on a per capita basis. We spend more on health care alone than the entire GDP of France; however, average patient outcomes in the U.S. are on par with Cuba and Slovenia, in spite of newer hospitals and more varied technologies.

Three federal initiatives — the HITECH Act, the Health Data Initiative (HDI) and the Affordable Care Act (ACA) — are designed to improve clinical quality and the patient experience, and make health care more affordable. As a result of these changes, several trends are emerging, including movement toward outcome-based payments, higher labor productivity, decreased demand for hospital-based care and better, more efficient consumer markets.

Four major trends are driving these benefits for consumers and employers:

Outcome-based payments

As we move from paper-based to digital records, we are able to change what patients buy, how payors pay, and how doctors are reimbursed for care. Outcome-based payments increase the importance of care coordination, so providers will need increased technological capabilities to share data, form care teams, and perform predictive modeling to figure out which patients are at higher risk. Consumers benefit greatly from these models because a doctor’s success will be contingent on his or her patients doing better and spending less money on unnecessary services. It will also lead to more convenience for patients because doctors will want to see them on nights and weekends rather than send them to the emergency room or readmit them to hospitals. Emails will also not go unanswered when it is in the doctor’s interest to make sure patients know what to do.

Higher productivity

Astonishingly, healthcare has not been able to replicate the productivity gains of the broader U.S. economy over the last twenty years. As we age and expand coverage in an era where prices cannot consistently increase faster than GDP, health care providers will need to creatively address and improve labor productivity. Almost every other sector of the U.S. economy has improved labor productivity, achieved better value and, in some cases, reduced prices. Health care providers who develop more productive ways to deliver care should improve margins, gain market share and improve the competitiveness of their businesses. This is good for patients, payors, and providers as waste is eliminated and reliability improves.

Lower demand for hospitals

Even considering the nation’s aging demographic, most hospitals will continue to have excess capacity. As reimbursement systems increasingly reward cost efficiency and reductions in readmissions and complications, many markets may become over-bedded. Additionally, alternative, less expensive treatment settings should become more common, including urgent care centers, high intensity primary care and extensivist models, and home-based care models as remote monitoring is perfected and new therapies continue to shift care from inpatient to outpatient settings. Those patients left in hospitals will increasingly be limited to the most complex patients.

Better functioning markets

Millions of newly covered health care consumers will join the ever-increasing numbers who already have high cost-sharing health plans. Even in subsidized Silver-level Exchange plans, consumers will have 30% cost sharing up to their out-of-pocket maximum if their incomes are about 350% of the federal poverty level. This group will be sensitive to differences in price and value, resulting in a more engaged patient consumer base and causing the health plan and provider marketplaces to become more competitive. Patients will have access to more data and new applications to help them shop for health care based on price, quality, and convenience; hospitals may compete on outcomes and experiences; doctors will seek to differentiate their services and will likely further specialize around particular conditions and types of patients they are expert in caring for.

Health care reform will also cause the roles of health plans, hospitals and doctors to evolve in several ways:

Health plans will offer a more rewarding member experience

Health plans are already responding to these trends by creating more engaging, consumer-centric ways to get people to care about, and improve, their health, such as sponsoring wellness programs and member education; developing their own health care delivery systems which offer unique member care experiences; and offering analytic support for providers to help them better achieve population health goals cost effectively.

Hospitals will have to compete on care outcomes and total value

In a health care system where consumers have easy access to information on hospital cost, quality and patient experience, hospitals may find it difficult to compete on factors such as the newness of the facility or the breadth of services offered, and may instead need to compete on their ability to deliver superior outcomes at a better cost. We anticipate that all hospitals will want to take action to assure that their doctors adhere more often to evidence-based practices, comply with cost-effective care processes, and support efforts to reduce complications and readmissions.

There will be a race to employ doctors

Today, slightly more than half of all practicing doctors in the U.S. are employed by hospitals. Employing physicians helps hospitals by enabling them to better manage what happens to patients before and after the hospital, encourage the use of cost-effective supplies and medical devices, implement clinical pathways, and work well in team-based care models. Health plans are also beginning to employ doctors to better manage risk, enable population health management, and help create unique products and, perhaps, specialized delivery systems.

Bottom line: the ACA, the HITECH Act, and the HDI are moving the U.S. toward a health care system that can be more cost effective, more accessible, and deliver better outcomes. These initiatives, coupled with the new innovations emerging in health care IT, will help drive this change and are making it an exciting, and better, time in health care for consumers, entrepreneurs and investors alike.


Creating Outcome-Driven Health Care Markets

This post first appeared in the Health Affairs Blog.

America’s health care market does not work well.  It is inefficient, asymmetric, and in most cases not particularly competitive.  The Affordable Care Act (ACA) legislated a myriad of changes to reform and improve insurance markets, with exchanges as a centerpiece.  While exchanges and reforms like subsidies, guaranteed issue, age bands, community rating, reinsurance, and risk adjustment are all helpful, a huge opportunity remains to segment the health care market around different categories of patient demand.

Basic economic theory states that a well-functioning market is aligned between supply and demand.  Ideally, suppliers and customers align around the preferences of the customers – the unit of alignment is driven by the demand side.  When we examine health care, we see demand falling into three segments:  healthy people who have episodic needs, chronic disease patients with predictable needs, and highly complex patients with less predictable needs.  Given the high variance between the three submarkets, we believe that each of these segments should be thought of as a discrete market and served by different types of insurance products, payment models, and health care providers.

We believe that this is necessary since each of these segments values providers differently.  For a healthy patient with periodic needs, convenience and experience are likely to matter more than continuity with a provider and care team.  Conversely, chronic disease patients are likely to value clinical outcome attainment, complication avoidance, and care coordination very highly.  And complex patients will need and value the customization, access to research, and specialization that the latest medical breakthroughs can deliver.  Not only are the sources of value different, but so are the delivery systems and payment models needed to align incentives for value.

Re-Envisioning The Health Care Market

Healthy patients.  For healthy patients with periodic needs, an episodic approach is also the most economically efficient.  These patients do not require the fixed cost of a large system of care and instead should purchase discrete specific services — ideally a bundle of care to deliver a specific pre-defined outcome.  In this model, a patient buys insurance and broad access to providers and, when a health need arises, receives a budget for his or her episode of care.  We favor reference-based pricing so the patient can purchase an episode outcome without additional cost sharing while retaining the option to pay more if he or she chooses.  The market is thus incentivized to manage to a specific outcome in the most cost-effective way and to compete on delivering extra value for those patients who are willing to pay more.

To meet demand for bundled payments organized around episode outcomes, the supply-side should realign into specialized care units that focus on a few procedures, organ systems, or disease areas, rather than broad PPO networks.  This has historically proven successful for elective conditions: the Dartmouth Spine Center has a surgery rate of 10 percent — lower than the national rate — with 100 percent of patients reporting their needs were well met.  Other examples of episodic providers include ambulatory surgery centers and orthopedic and cardiovascular specialty hospitals.  We also foresee capitated systems managing population health procuring discrete episode services from specialty providers, since these providers should be able to offer equal or better outcomes at lower prices than an Accountable Care Organization (ACO), integrated delivery system, or multispecialty group.

Patients with chronic disease.  For chronic disease patients, the primary outcome goal is to minimize a condition’s short-term inconveniences and long-term complications.  The market should thus incentivize a long-term perspective centered on patient engagement, adherence, and side-effect prevention.  On the demand side, customers should purchase care from a provider able to care for all aspects of a patient’s condition, which creates incentives for all players — patients, doctors, providers, and drugmakers — to manage cost.  The basis of competition should be the ability to deliver annual health and complication avoidance at lower costs.

In this model, incentives are most aligned when providers are paid using a risk-adjusted capitated payment.  To compete, providers should organize in organizations such as ACOs, Patient-Centered Medical Homes (PCMHs), multispecialty groups, or integrated delivery systems with strong capabilities in managing risk, population health, and costs.  Examples of these types of systems are Group Health, Geisinger, Kaiser Permanente, Healthcare Partners, and CareMore.  In this model, incentives for patients to adhere to treatment plans, and remain in the system of care, are reinforcing.

Complex patients.  Finally, there are certain conditions that are too complex to fit into either market, such as complex cancers, high acuity conditions, and rare diseases.  These conditions often exhibit both chronic condition and episodic characteristics and are best managed by academic medical centers or high-acuity specialty facilities like comprehensive cancer centers and children’s hospitals.

Our view is that the current fee-for-service system is the best approach for handling these cases.  To constrain inflation and encourage competition, fee for service should be coupled with utilization review, incentives to use evidence-based care, and transparency around risk-adjusted outcomes and expected out-of-pocket costs.  Paying for these as episodes will not work because of high patient heterogeneity and because the size of the market cannot support competition at the episode level.  Moreover, it is hard to define quality and value for many of these types of patients and conditions.

The Way Forward: Turning Theory Into Practice

These payment models and provider organization approaches maximize value by encouraging healthy patients to get their conditions fully resolved for a fixed price, chronic disease patients to access a care team rewarded for avoiding complications, and complex patients to receive customized care and access from specialists.  Furthermore, each of the three submarkets — healthy patients, chronic disease, and complex and rare conditions — is large enough to be self-sustaining and attractive.  We estimate that in 2011 the chronic condition market is $1.1 trillion, the episodic care market is $760 billion, and the residual fee-for-service complex and rare conditions market is $900 billion.

The three markets are growing at similar rates.  (See Exhibit 1.)  If the economics are aligned, they will also be able to create growing value for patients through productivity gains, falling prices, better outcomes, and far better patient experiences.  Fortunately, each of these markets and provider models exists today in many geographies.  They are just not widespread enough or coexistent.


Giving consumers more insurance options.  Transforming theory into a tangible system presents certain challenges, which we believe will be overcome in the next few years.  First, patients need insurance product choices.  The advent of state exchanges and community rating are catalytic events that could lead to this reorganization if exchanges permit reference-based pricing plans and allow integrated delivery offerings with narrow networks.  The employer market is already moving down this path with the marked increase in defined-contribution health benefits supported by private exchanges, where higher cost sharing and narrow network plans are often offered.  We are also seeing many more employers shifting to reference-based pricing and episode bundling approaches for elective conditions to reward employees for selecting high-quality, lower-cost providers, and to encourage providers to offer a full course of care for a bundled price.

Provider restructuring.  On the provider side, overhead should theoretically decrease rather than increase.  The largest providers that can pull in adequate populations will focus on patient and population health, a trend already being seen with groups like Partners Healthcare shifting to an ACO and capitated payment orientation.  Competition will also lead to the emergence of more specialized providers for acute and episodic care among community hospitals.  For complex and rare conditions, regional centers like the Mayo Clinic and traditional academic medical centers already exist.  The push to submarkets should accelerate the provider landscape transformation and reduce the extraneous providers that lack focus and a niche.

The biggest barrier today is linking benefit designs and reimbursement models with patient segments.  Once commercial payers approach providers with products that segregate patients into these segments with corresponding reimbursement, providers will rapidly reorganize to serve the segments that they are most competitive at supporting.  While this approach does generate more ACOs and PCMHs, the past year has shown that these can be formed relatively quickly to meet demand.  The emergence of retail-oriented primary care providers also indicates that episodic care models are able to proliferate and scale in response to demand.

Addressing changes in consumer health needs.  One additional challenge will be how patients react when health needs change mid-year.  All patients, regardless of submarket, will have certain basics: insurance, preventive care, and consumer protections.  Longitudinal care is valuable irrespective of submarket and hugely important to reducing the growth of health care costs. As health needs change during the year, we see two potential solutions. First, patients will still have access to other submarkets to receive the necessary care.  Second, the pool of patients who will need product adjustments will be significant, and their value cannot be ignored by payers. Thus, some supplemental plans may emerge that enable patients to gain access to additional types of providers.

Matching patient needs and demand with specific types of providers and reimbursement approaches is better for patients.  If incentives are aligned with the types of value desired by different types of patients, price increases should no longer outpace value creation, and providers will compete and differentiate in ways that are most valued by their core patient constituencies.   Doing this through the creation of well-functioning submarkets — instead of forcing a single, ill-functioning market — should also unleash productivity gains as providers specialize around narrower segments and stop investing in services that they do not do well, do at scale, or need.

Overcoming the barriers will be a significant challenge.  However, we are already seeing some shifting in the health care landscape, in addition to certain provisions in the ACA which will come online in the upcoming years and add further movement.


Why The Supreme Court Decision On Health Care Reform Doesn’t Really Matter

This post first appeared in Forbes.

When the Supreme Court announces its decision on the constitutionality of the Affordable Care Act (ACA), it will kick off a storm of analysis around the political impact.  An outcome that upholds the law as a whole will be seen as a positive for the President, whereas a decision that strikes down some or all of the law could be a boon for the Republicans and Governor Romney’s campaign.  The middle scenario results in parts of the law, such as the individual mandate, being struck down, with the rest upheld.  While it is impossible to predict the outcome, it is a safe prediction that the decision will be followed by a lot of punditry on cable news channels.

This debate misses one essential fact:  the Supreme Court’s decision does not matter as much as political pundits think.  The American healthcare system is in the midst of intense experimentation and change that cannot, and will not, be stalled by the whims of the judicial system.  The major forces behind the changes in our healthcare system—rising costs; an older, sicker population; and technological innovation—show no signs of abating and do not depend on Federal legislation.  They are fueled by private sector demand, not policy preferences in Washington.  So, while coverage may not come as soon as hoped for the uninsured, the systemic efforts to make our health system more affordable and higher quality will continue.

Some changes, like the widespread replacement of paper records by electronic medical records, are visible from a patient’s-eye-view; others, such as changes in the relationship between insurers and hospitals are not immediately visible.  These less-visible changes are potentially even more profound.

Some of the most important changes involve the financing of health care.  In the traditional way of doing business, insurers paid hospitals and physicians under a “fee for service” system; essentially they are paid for the quantity of medical services provided, regardless of the outcome.  Unfortunately, fee-for-service is inflationary, giving hospitals a perverse incentive to focus on driving up volumes and activity without regard for cost, value, or achieving better outcomes.  Even if a patient gets the wrong care or inadequate care, the hospital is paid to treat complications or readmissions rather than to prevent them.  Nobody—especially not the patient—wins and all of us pay.

HMOs grew in the 1990s in an attempt to fix these incentives by combining the insurer and the care provider, but patients hated their HMOs because they restricted choice without proving that they were delivering better care.  The current wave of experimentation, in flavors such as bundled payments for “episodes” of care (e.g., paying for everything associated with a diagnosis, procedure, and treatment in a single payment), accountable care organizations (ACOs), and new penalties for excessive hospital readmissions are all tactics to try to fix these misaligned incentives and aim to cut costs and improve care for the patient, while preserving choice.

Further proof that these reforms have staying power well beyond HMOs is the recent pledge by United Healthcare, the nation’s largest private insurer, to follow many of the regulations included in the ACA even if the law is repealed.  Other insurers, including Aetna and Humana, quickly made similar pledges.

Even Congress’ current gridlock cannot completely choke off policy innovation at the federal level.  The Centers for Medicare and Medicaid Services has used its authority to experiment with a variety of demonstration projects that do not require congressional approval.  One example is the current project to award bonus payments to Medicare Advantage plans that score well on the program’s Star rating system for providing better and more cost-effective care.  While a repeal of the ACA would remove some of the agency’s authority, it would not prevent the agency from experimenting with a variety of demonstration projects that could lead to improved payment models.

Not to be left out—a great deal of experimentation is happening at the state level.  One of the cornerstones of the ACA is the requirement that each state create a “health insurance exchange” which serves as an online marketplace for individuals to purchase health insurance.  Eighteen states, representing 42% of the US population, have laid or are in the process of laying the legislative groundwork to establish exchanges.  Many of these states, including California, have indicated that they plan to continue with the exchanges regardless of the Supreme Court’s decision.

Patient attitudes are changing too.  Polling conducted by the West Wireless Health Institute shows that patients are increasingly worried about the costs of their care and taking steps to control it.  This means that they are more likely to scrutinize their health insurance benefits and opt for high-deductible health plans (HDHP).  While attitudes can be slow to change, the inexorable force of the millions of patients choosing health plans that reward shopping for lower cost and better quality care will not be stymied by the Supreme Court’s ruling.

So while it is uncertain how the Supreme Court will rule, or how the ruling will impact November’s results, it is a certainty that our health system will continue on the path of more affordability, more integrated care, and more focus on patients because consumers are demanding these changes. These unassailable trends are led by the private sector and, if anything, will accelerate regardless of the Supreme Court’s views of the meaning of the Commerce Clause.  Moreover, since the private sector contributes virtually all the profitability in the health system, it is a force far more powerful than politicians.


Network Effects are Magical

Network Effects are magical.  They are the pixie dust that makes certain Information Technology businesses, especially on the Internet, into juggernauts.  They can be found in both consumer and enterprise companies.  Network Effects are special because they:

  1. Provide exponential growth and value creation potential
  2. Erect barriers to entry to thwart would-be competitors
  3. Can create “Winner Take All” market opportunities

Network Effects are like a flywheel: the faster you spin it, the more momentum you generate and enjoy.  But not all markets lend themselves to Network Effects.  They are not the same as Economies of Scale, where “bigger is better.”  To be sure, Economies of Scale can give strong competitive advantage and defensibility to the first to get really big (or reach Minimum Efficient Scale, as economists call it).  For example, SAP and Oracle benefit from having massive revenue bases, which enable them to employ armies of engineers who develop rich feature sets and to hire huge sales forces.  However large these companies are today, though, their growth rates, especially in their early years, were far more modest than those of Network Effect companies, whose growth resembled a curved ramp off of which they launched into the stratosphere.

There are four main types of Network Effects:

  1. Classic Networks, in which the value of a product or service grows with the square of the number of others using it.  Communications networks like telephones, fax, Instant Messaging, texting, email, and Skype are all examples.  Metcalfe’s Law captures this as a simple equation: the value of a network = N², where N is the number of nodes.  Typically, each node in a classic network is similar to the others and possesses both send and receive capabilities.  This will become clear juxtaposed against the other network effects below, where there are different types of nodes.  Other examples of Classic Networks are social networks (e.g., Facebook) and payments (e.g., PayPal).
  2. Marketplaces, where aggregations of buyers and sellers attract each other.  Lots of sellers means variety, competition, and price pressure, which all serve to attract more customers.  And because the customers flock, more sellers are enticed to participate in the marketplace.  eBay, stock exchanges, and advertising networks are all examples.  One nuance of marketplaces, however, is that they differ in the scale required for acceptable liquidity.  For example, ad networks can achieve sufficient reach and liquidity at relatively low levels, which is why you see thousands of online ad networks, each exhibiting network effects but not in a winner-take-all fashion.  Stock exchanges and payment networks require far greater scale for network effects to operate, which is why you see much greater concentration in these industries.
  3. Big Data Learning Loops.  “Big Data” is all the rage in techland, but just having gobs of data is not necessarily a Network Effect, nor any sort of competitive advantage per se.  What you really need is unique data and algorithms that process that data into insights, which then lead to decisions and actions.  A flywheel effect comes when you get a critical mass of data that you mine for insights; you pump that value back into your product or service, which attracts more users, which gets you more data.  And so on.  Venrock portfolio company Inrix is a good example: they mine GPS data points to derive automotive traffic flow data.  The more commercial fleets, mobile app users, and car companies they can get data from, the better their traffic analysis becomes, which gets them more users and hence more data.  They turn data into an accuracy advantage that earns them the right to get even more data.
  4. Platforms are a very special and powerful form of network effects.  In Information Technology, a true “platform” is where other developers build technology and businesses on top of your technology and business because you offer them one or more of the following:
    1. Lots of users/customers, and you represent a distribution opportunity for them
    2. Compelling development tools, technology, and (sometimes) advantageous pricing
    3. Monetization opportunities

Examples include operating systems like Microsoft Windows, as well as the Apple App Store and Amazon Web Services.
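Metcalfe’s Law from the list above is easy to see numerically.  Here is a toy Python sketch (the per-node values are purely illustrative, not drawn from any real company) showing how an N² network value starts behind a linear economies-of-scale curve but overtakes it as nodes are added:

```python
# Toy comparison: Metcalfe-style network value (~N^2) vs. linear scale value.
# All numbers are illustrative only.

def network_value(nodes: int) -> int:
    """Metcalfe's Law: value grows with the square of the node count."""
    return nodes ** 2

def linear_value(nodes: int) -> int:
    """Economies of scale: value grows roughly in proportion to size."""
    return nodes * 100  # arbitrary per-unit value so small players start ahead

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes: network={network_value(n):>12,}  linear={linear_value(n):>12,}")
```

With these assumed constants the two curves cross at 100 nodes; beyond that, the network business pulls away at an accelerating rate, which is the “curved ramp” growth described above.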

Each of these four types of network effects can be extremely powerful on their own.  Yet even more power is derived when a business can harness multiple types of network effects in synergistic ways.  Google, Apple and Facebook do this for sure, but a less well-known example is Venrock portfolio company AppNexus, which operates a real-time online advertising exchange and technology platform.  The exchange aggregates advertisers, agencies, publishers and ad networks for marketplace liquidity, but also offers a hosting and technology platform for other AdTech companies and ad networks to augment their own businesses.  And the vast troves of data AppNexus processes every millisecond flow back into the system as optimized and targeted ad serving.

Network Effects are what you want fueling your business.  Sometimes you just need to get clever about discovering and harnessing them.


Meaningful Use Of Health IT Stage 2: The Broader Meaning

This post first appeared in the Health Affairs Blog.

On February 24, the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC) issued the proposed “stage 2” rules for the meaningful use of electronic health records.   Stage 2 unequivocally lays out three bold requirements that are sure to be transformative to the United States healthcare system over time.  First, it standardizes data formats to dramatically simplify how information is both captured and shared across disparate IT systems.  This will make healthcare IT systems truly interoperable, one outcome of which will be greatly expanding patients’ abilities to choose where to receive care.

Second, it is emphatic that patients be able to access and easily download their healthcare records and images for their own use.  This will spawn an industry utilizing this “big data” to provide solutions to patients and providers that help manage care, shop for care, and even invent new models of care delivery. Third, it expands the scope of tracked quality metrics to include specialists and to reflect outcomes as well as care coordination.

Together, these three major requirements will drive the birth of new payment models and incentive structures leading to improved productivity and outcomes.  As a result, Stage 2 will fuel an ecosystem of companies attacking healthcare inefficiencies in ways that are not yet even imagined.

“Stage 2” commences in 2014 for providers who demonstrated stage 1 meaningful use in 2011.  For all others, it begins in year four of meaningfully using an electronic health record.  Providers who meaningfully use electronic health records will receive $44,000 in Medicare incentives over five years.  For those who do not, penalties begin in 2015 and grow to as much as a 5 percent reduction in Medicare reimbursement over five years.

To qualify as a stage 2 “meaningful user” of electronic health records, providers need to comply with, and track, 20 functional metrics and 12 clinical quality measures. (See Exhibit 1 below, click to enlarge)  The functional metrics are very similar to stage 1 albeit with higher performance thresholds in many cases.   In keeping with the theme of stage 2, CMS emphasizes data sharing, patient engagement, and decision support in order to improve clinical quality measures.


Rationalizing quality metrics. The clinical quality measures represent a major advance, aligning quality scorecards across HHS’ programs.  A major burden for providers to date has been the checkerboard of clinical quality measures that HHS programs (PQRI, ACO, NCQA-PCMH, CHIPRA) ask providers to report.  Stage 2 aligns all of these programs to satisfy the clinical quality measure reporting requirements. Hopefully, private payors will also adopt these measures for their quality programs since they will be already incorporated into certified electronic health records.  If this occurs, it would radically reduce the complexity and burden of quality reporting for providers, and thereby increase clinicians’ focus on improvement.  Furthermore, by expanding the pool of potential metrics, specialists will be able to select logical sets of measures to track and report with required decision support.

Liberating data. While the mechanics are important, what makes stage 2 transformational is that big data sources and uses – likely across payors — will inevitably emerge.  This will be a byproduct of the requirement that all data be captured using the same standards, the expansion of quality measures to cover a majority of healthcare spending, and the ability for patients to download data.  The proposed rule requires providers to have at least 10 percent of their patients “view, download, or transmit to the third party their health information.”  In a short time, an enormous trove of data will flow outside the electronic health records of physician offices.

Big data offers great potential.  In other sectors, mining large data sets has led to breakthroughs in productivity, consumer experience, and cost structure.  It is also needed to make approaches such as IBM’s Watson supercomputer practical for healthcare, as the quality of machine learning results depends substantially on the amount of data available.  It will not be long until patient-level information is combined with large existing data sets like those being liberated by the Health Data Initiative.  These combinations will generate far more accurate predictive modeling, personalization of care, assessment of quality and value for many more conditions, and help providers better manage population health and risk-based reimbursement approaches.

Big data also enables payment reform.  Stage 2 solves the circular problem of needing big data to support non-fee-for-service payment models, and needing new payment models to stimulate the production of the data.  Stage 2 finally removes the barrier of lack of access to data through the ubiquitous reporting of so many quality measures and downloaded patient records.  The transparency facilitated by far deeper, richer, and more timely and specific information will enable payors, consumers, and employers to pay differently for care.

At a minimum and initially, huge variations in prices will be arbitraged, reaping savings for patients and payors.  Over the longer term, it is likely that many variations of episode-based payments will emerge that are tied to specific improvement in outcomes; geographic markets will expand for patients shopping for care, thereby increasing competition; and new lower-cost delivery models will emerge for lower acuity and chronic conditions.

The need for privacy and security. The big concern is rightfully privacy and security.  Nothing would stifle progress faster than misuse of data.  While the meaningful use program has specific security requirements for providers, it will be important for existing privacy rules and security requirements to be rigorously enforced.   The virtuous cycle of innovation enabled by data liberation depends on trust by patients and providers.  Patients will need confidence that the value they get from contributing their health data to datasets outweighs the risks.  Providers will need similar confidence to encourage their patients to access their data and use the tools that will emerge.  One hopes that a market quickly develops that has a high bar for privacy and security, engages patients in their health and healthcare, fairly represents provider performance, supports shared decision-making, and helps providers achieve clinical goals that increasingly are linked to reimbursement.

While the public comment period will inevitably surface many skirmishes over details, the final rule should cement a path towards 1) new reimbursement models that reward outcomes and coordination; 2) massively more data to support patients and providers; and 3) a far more dynamic marketplace.  Providers will be well served to view stage 2 not as a requirement to better use their electronic health records, but as a foreshadowing of how to compete and thrive in a future that is coming sooner than most anticipate or are prepared for: a future where a provider’s ability to deliver reliable outcomes, economic value, and exceptional patient experiences will soon be transparent to peers, competitors, payors, and, most of all, patients.


How to Moderate a Panel That Doesn’t Suck

On April 14th I am moderating a panel at the Digital Healthcare Innovation Summit in New York City titled “The Hospital as Production Center:  Holy Grail or Impossible Dream?” [For anyone who wants a discounted registration rate, see the end of this post.]  In an effort not to suck, I’ve put some thought into what makes a great panel.  Like many conference junkies in the tech and finance worlds, I’ve sat through hundreds of panels, been on a bunch, and moderated a few handfuls over the years.  Here’s a list of a dozen suggestions that I plan to implement:

  1. Have at least one colorful character on the panel. Conferences can be a grind, and lots of people find the most value is in the lobby, meeting people.  For those willing to actually sit through your panel, you want to entertain them as well as inform them if you expect them to pick their heads up from their smartphones and remember anything from the hour or so they give you of their (partial) attention.  Having at least one spicy rebel on the panel who is willing to share provocative views and mix it up with the other panelists is key.
  2. As the moderator, get your panelists on a call ahead of time to brainstorm and interact with each other. This is your opportunity to figure out if you’ve got the right mix of characters and also form a plan for what you’ll cover and set expectations.  Don’t procrastinate on this.
  3. Know your audience. Conference organizers purposely cast a wide net in their marketing and promotional materials so they can get the best turnout.  Find out for sure who the bulk of the audience is really likely to be.
  4. Send questions ahead of time. Your goal as panel moderator is to make your panelists look brilliant, not to try and stump them so you look like the smartest person on stage.  Give them the questions ahead of time and know who is likely to have the best answers for each of them.
  5. Keep intros brief.  Maybe not at all. Most intros take too long and are pretty boring.  If the conference materials have speaker bios, I personally don’t think there is any need to go into detailed introductions other than to identify who is who.
  6. Know the context of the rest of the conference.  Pay attention and make reference. Planning your topics and questions ahead of time is great, but you want to keep in mind the context of the rest of the conference so there is minimal duplication but appropriate linkages to other topics and speakers, etc.  If prior speakers or panels have covered topics relevant to your panel, make reference to them.  It shows the audience you were not sleeping through the earlier sessions, so maybe they won’t sleep through yours.
  7. Use social media to promote, distribute, and even moderate in real time. Twitter, LinkedIn, Facebook, your blog, are all great ways to promote your panel ahead of time.  SlideShare is a great way to distribute PowerPoint or materials afterwards.  Set up a Twitter hashtag to solicit questions ahead of time and from the audience during the event.  I’ll be using #hosprod as the Twitter stream for my panel.  Feel free to send me questions ahead of time, and check for comments during the panel.
  8. Hit the hard deck, dig for details and examples. Give the audience reasons to take notes by getting granular.  Force the panelists to get specific and give real information.
  9. Stir the pot.  Incite a riot. A panel where everyone agrees with every point is boring.  Elicit differing viewpoints and force the discussion to explore the conflicting opinions.  This will likely be the most useful content, as well as the most entertaining.  Avoid chair throwing.
  10. No crop dusting. It can be very monotonous when the moderator goes up and down the row asking each panelist each question.  Pick your respondents strategically and use them for different purposes.  Move on to the next question as soon as the topic has been sufficiently covered, regardless of whether everyone answered.
  11. Engage the audience, but moderate ruthlessly. Audience Q&A can be very useful and fun, but can also attract rambling questions, people shamelessly plugging their own company/viewpoint, or all manner of unexpected divots.  It’s your job to be respectful but firm in keeping the Q&A on track out of respect to the rest of the audience.
  12. Watch the clock. The ultimate respect for your audience is to finish on time.  Even if your panel is rockin’ and everyone is having a great time, you should finish within the allotted timeframe.  If they still want more, they can follow up with you and the panelists afterwards.

If you are interested in attending, enter the special key code VNRPR to receive the discounted rate of $695.00.  You can also contact Cathy Fenn of IBF at (516) 765-9005 x 210 to enroll.


Forget Super Bowl Commercials…these web companies know how to create awesome marketing videos.

By the time you read this post, Super Bowl XLV will be over and everyone will be talking about the … commercials.  Why?  Because most of them are entertaining, some are memorable, and the $2.5 million price tags (for air time alone) pique our curiosity.  Why are brands willing to pay so much?  Because it is one of the only ways to reach 100 million consumers simultaneously, and because a great 30 second video ad packs an emotional payload in support of your brand unlike virtually any other form of advertising.

Over the past few years I’ve noticed more and more web companies producing great videos to market their companies, often presenting them front and center on their homepage as the introduction to their company.  A great video overview can really help explain what you do for customers, how you do it, and present your brand in a flattering light.  The best videos go viral and bring you exponential attention and new visitors.   And web videos have never been cheaper to produce (at 1/2,000th the cost of a Super Bowl commercial, even a start-up can afford them).  So, here are five thoughts on what makes a great marketing video for web companies, and a bunch of examples:

Answer WIIFM (“What’s in it for me?”): A great marketing video should clearly and convincingly articulate a few simple benefits that customers care about. does a terrific job of this, as does Dropbox, both front and center on their homepage.  The Dropbox video is particularly noteworthy because it takes an esoteric concept and uses analogy to demonstrate user benefits everyone can relate to.

Show how it works: A great overview video shows just enough of the product and how it works to lend credibility to the benefit statement.  Word Lens does a terrific job of this for a product that truly needs to be seen to be believed.  A full blown demo would have been less effective than just these short glimpses of the product in action.

Be yourself: Video is such a rich and engaging medium it is perfect for showing the personality of your brand.  It is a great way to set tone and speak to your customers and prospects in an authentic voice. does a terrific job of this through music and images alone, letting actions speak louder than words in convincing you that they can make your personal homepage look amazing because they do such a killer job of presenting themselves through this video.  Style personified.

Be fun, get remembered: Great marketing videos are fun to watch and somewhat memorable.  You don’t have to be knee slappin’ funny or so hip it hurts, just smile-inside funny will go a long way.  SalesCrunch and SolveMedia both take pretty dry categories (CRM SaaS and AdTech respectively) and rivet their viewers through entertaining use of cartoons and wit.

Be Brief: Even a great marketing video starts to feel long after two minutes.  Shoot for less.  This video from Smartling gets the job done in 38 seconds.  [Disclosure: Smartling is a Venrock investment.]

These are the five characteristics which I think make for a great marketing video for your web company.  If you think there are points I missed, or have other great examples, please comment and add to the list.  If you are the production agency responsible for making any of these videos please take a bow by claiming your work.  I’m sure others will want to contact you.  If you are looking for more of a live action marketing video, SmartShoot and other online videographer marketplaces can help produce custom video for ridiculously low rates.

Thank you to Ward Supplee, David Pakman, Dev Khare, Dan Greenberg, and Arad Rostampour for sharing some ideas for this post.


The single best financial reporting tool ever

Today I faced a choice.  Should I go out and enjoy the beautiful weather and waves and go for a surf or should I blog about my favorite financial reporting tool?  Seems like a pathetic question for a surfer to ask, or maybe this financial reporting tool is really that great.  I’ll settle for an answer of “both”.

The tool in question is the Waterfall Chart.  It’s a way to compare actual results across time periods (months or quarters usually) against your original Plan of Record, as well as forecasts you made along the way as more information became available.  It packs a ton of information into a concise format, and provides management and Board members quick answers to the following important questions:

  1. How are we doing against plan?  Against what we thought the last time we reforecast?
  2. Where are we most likely to end up at the end of the fiscal year?
  3. Are we getting better at predicting our business?

The tool works like this:

Across the top row is your original Plan of Record.  This could be for a financial goal like Revenue or Cash, or an operating goal like headcount or units sold.  Each column represents a time period.  I like monthly for most metrics, with sub-totals for quarters and the full fiscal year.  Each row below the Plan of Record is a reforecast: a current working view of where management thinks it will wind up based on all the information available at that time.  Click the example below (as of August 15, 2010) to see a sample, or click the link below to download the Excel spreadsheet.

Waterfall Report spreadsheet (click to enlarge)

Periodic reforecasting does not mean changing the official Plan of Record against which management measures itself.  Reforecasts should not require days of offsite meetings to reach agreement.  They should be something the CEO, CFO, and functional leaders like the VP Sales or Head of Operations can hammer out in a few hours.  Usually these reforecasts are made monthly, about the time the actual results for the prior month are finalized.

When you have an actual result, say $2,111 for the month of August in the example above, it goes where the August column and August row intersect.  On that same row, to the right of the August actual, you put the new forecasts you are making for the rest of the year (September through December).  In this fashion, the bottom cells form a downward stair-step shape (a shallow waterfall, perhaps?) with the actual results cascading from upper left to lower right.  You can get fancy and put the actuals that beat plan in green, and those that missed in red.  You can also add columns to the right of your last time period to show cumulative and year-to-date (YTD) totals.  With or without these embellishments, you’ve got some really powerful information in an easy-to-visualize chart.
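The stair-step layout described above can be sketched in a few lines of Python.  The figures below are invented for illustration (only the $2,111 August actual comes from the example), and a real waterfall would of course carry a full fiscal year of columns:

```python
# Minimal sketch of a waterfall report: one row per forecast vintage,
# one column per month.  Cells up to the reforecast date hold actuals;
# cells to the right hold the forecasts made at that time.
# All figures are illustrative.

months = ["Jul", "Aug", "Sep", "Oct"]

rows = {
    "Plan of Record":    [2000, 2100, 2200, 2300],
    "Reforecast 31-Jul": [1950, 2080, 2180, 2290],  # Jul actual + new forecasts
    "Reforecast 31-Aug": [1950, 2111, 2190, 2300],  # Aug actual lands under Aug
}

header = f"{'':<18}" + "".join(f"{m:>8}" for m in months)
print(header)
for label, values in rows.items():
    print(f"{label:<18}" + "".join(f"{v:>8,}" for v in values))

# Variance of the latest actual against the Plan of Record:
plan = rows["Plan of Record"]
latest = rows["Reforecast 31-Aug"]
aug_variance = latest[1] - plan[1]
print(f"\nAug actual vs. plan: {aug_variance:+,}")
```

Reading down any column shows how your view of that month evolved; reading across the newest row gives the current best estimate for the full year, which answers the three Board questions at a glance.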

Two questions an entrepreneur might ask about this tool:

By repeatedly comparing actuals to plans and reforecasts, won’t my Board beat me up each month if I miss plan or, even worse, miss forecasts I just made? If you are a relatively young company, most Boards (I hope) understand that planning is a best-efforts exercise, not an exact science.  Most Boards will react rationally and cooperatively if you miss your plan, as long as you avoid big surprises.  By giving the Board updated forecasts you decrease the odds of big surprises because the latest and best information is factored into the equation as the year progresses.  They probably won’t let you stop measuring yourself against the Plan of Record, but at least you’ve warned them how results are trending month to month, and course corrections can be made throughout the year.

Won’t this take a lot of time? Hopefully not a ton, but it does take effort.  That effort should be well worth it beyond just making the Board happy: as a management team you obviously care about metrics like cash on hand, and you should be constantly recalibrating them anyway.  The waterfall is the perfect tool to organize and share this information.

Most of my companies using this tool track five to ten key metrics this way.  Typical metrics include:

  • Revenue
  • New bookings
  • Cash on hand
  • Operating expenses
  • Net income
  • Headcount
  • Units sold or new customers acquired
  • Some measure of deployed/live customers (if there is a lag between a sale and a live customer)
  • For internet companies, some measure of the “top of the funnel” such as Unique Visitors or Page Views

Whether or not you agree this is the single greatest financial reporting tool ever, I hope you give it a try and find it useful.  Now I’m going surfing….