VRM Research opportunities

From Project VRM

Revision as of 09:12, 10 December 2009

Project Overview

Objectives

Our primary goal is to test one or more basic VRM principles (e.g. benefits of vendor openness, willingness of users to pay for perceived value in the absence of existing payment mechanisms provided by the seller). Results of research efforts will guide the expression of VRM principles, and, presumably, drive their adoption.

Additional benefits include bringing together passionate participants around a research project, demonstrating and furthering Berkman research methodologies and software -- and forcing some clarity and learning around testable characteristics of VRM.

Testable Principles

Generally speaking, VRM's vision is to equip individuals with tools that make them independent leaders, not just captive followers, in their relationships with vendors and other parties on the supply side of markets. VRM is successful when customers see direct benefits from taking control of their relationships, and vendors see alternatives to customer lock-in for gaining loyalty and generating profit.

This vision makes several assumptions, chief among them that a free customer is more valuable than a captive one. Testing this hypothesis (or, more accurately, specific versions and aspects of it) should be our primary goal. The hypothesis raises at least two important questions:

What characterizes a free customer?

  • Able to choose how to relate to a vendor
    • Customer relies on tools and data under their control to relate to and manage vendors
    • Choose what information to share and when
    • Choose how this information can be used (i.e. under what terms), for example:
      • Customer-generated data must be portable
      • Customer-supplied data must be retractable
      • Customer-supplied data can't be used for targeted advertising / marketing messages
      • etc.
    • Customer receives a copy of data that is provided or generated as part of doing business, e.g. transaction data
    • Full disclosure on how customer-supplied data is being used (privacy policy)
    • Options for terminating relationship at will and without penalty
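
One way to picture these characteristics together is a customer-held record that carries its own sharing terms and can be withdrawn at any time. This is only an illustrative sketch; all class names, fields, and defaults here are assumptions, not part of any existing VRM tool:

```python
from dataclasses import dataclass, field

@dataclass
class SharingTerms:
    """Terms the customer attaches to data they share with a vendor."""
    portable: bool = True             # customer-generated data must be portable
    retractable: bool = True          # customer-supplied data must be retractable
    allow_targeted_ads: bool = False  # no targeted advertising/marketing by default

@dataclass
class CustomerRecord:
    """Data under the customer's control, selectively shared with vendors."""
    data: dict = field(default_factory=dict)
    shared_with: dict = field(default_factory=dict)  # vendor name -> SharingTerms

    def share(self, vendor, terms=None):
        """Customer chooses what to share, with which vendor, on what terms."""
        self.shared_with[vendor] = terms or SharingTerms()

    def retract(self, vendor):
        """Terminate the relationship at will and without penalty."""
        self.shared_with.pop(vendor, None)
```

The point of the sketch is that the terms travel with the data and the retract operation belongs to the customer, not the vendor.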

What are the potential benefits to a vendor for freeing a customer?

  • Decreased cost/hassle of gathering, storing, and managing customer data where the customer is relying on their own tools
  • Increased attention / visibility to vendor for being open, i.e. being the open alternative in the market
  • Increased participation from customers wanting to engage with open businesses
    • Both initial willingness and ongoing engagement
  • Increased sharing / customer word-of-mouth around open products / services
  • Increased volume and quality of customer-supplied data
  • Decreased guesswork by the vendor when the customer tells them exactly what they want when they want it - or at least provides more/better information about themselves
  • Increased customer trust / loyalty / goodwill (longer term?)
  • Increased external innovation and value being generated around vendor services / data
    • e.g. if a vendor opens their transaction data, a 3rd-party service might help customers better manage their electronic receipts
  • Development of an ecosystem of value around vendor services, creating the open version of customer lock-in
    • e.g. Good services based on open transaction data encourage continued use of open transaction data provider

Open Questions

  • Similarity to "free culture" arguments, e.g. the benefits of CC licensing. Has prior research already been done here?
  • What aspects of the benefits above are perceptual vs. technical? How might we measure and test these?

Specific Research Proposals

Present users with a scenario that tests specific aspects of the hypothesis that a free customer is worth more than a captive one. Use Mechanical Turk and Berkman-developed web and measurement software tools for completing web-based, personal data-gathering scenarios - Doc Searls and Keith Hopper, with help from Jason Callina, Joe Andrieu, Tim Hwang and Aaron Shaw.

Proposed Experimental Scenario

This scenario entails a music recommendation process, in which participants are asked to share information with online music vendors in exchange for personalized song recommendations. Different scenarios within this experiment will test participants' willingness to engage, exchange information, and share the experience with others.

  1. Participant selects the Amazon HIT and agrees to complete an associated online process in exchange for a small amount of money (e.g. $.10)
  2. Participant is randomly assigned to one of two groups (free or captive)
  3. Both groups are presented with an identical, multi-step information gathering process - specifically, to provide music preference information (e.g. favorite artists and tracks) along with personal demographic information (e.g. name, address, sex, age, etc.). All questions/fields are optional.
  4. At the end of the information gathering process, both groups are informed they have completed the requirements to redeem their earnings. Additional steps taken at this point (e.g. listening, sharing) are not required.
  5. Upon completion of the entire process, both groups are provided the option to share a link to this project with a friend (or on twitter, facebook, etc).
  6. The differences between the free and captive experiences are as follows:
    1. Before information gathering begins, the free group is informed that the information that they are being asked for is collected strictly for their own use, with the option to share it with one or more vendors once the process is complete (privacy policy available?). For each vendor the free participant chooses to share their data with, the vendor provides recommendations for artists and songs which they can choose to listen to or purchase (or ignore).
    2. Before information gathering begins, the captive group is informed that the information that they are being asked for is being collected by and for a single vendor for the purpose of providing music recommendations that will be presented for listening and purchase. Upon completion of the information gathering process, relevant data is automatically shared with the vendor (user is not given a choice), and artist and song recommendations are provided in return. The participant can choose to listen to or purchase these tracks.
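
The numbered steps above can be sketched as a minimal assignment-and-collection flow. Everything here (function names, the field list, the disclosure wording) is an illustrative assumption for discussion, not an actual implementation:

```python
import random

CONDITIONS = ("free", "captive")  # the two experimental groups (step 2)

# Illustrative field list; all fields are optional, matching step 3
FIELDS = ["favorite_artists", "favorite_tracks", "name", "address", "sex", "age"]

def assign_condition(rng=random):
    """Step 2: randomly assign a participant to the free or captive group."""
    return rng.choice(CONDITIONS)

def disclosure_text(condition):
    """Step 6: the disclosure is the only scripted difference between groups."""
    if condition == "free":
        return ("Your answers are collected strictly for your own use; after "
                "finishing, you may share them with vendors of your choice.")
    return ("Your answers are collected by a single vendor, which will return "
            "music recommendations for listening and purchase.")

def record_session(condition, answers):
    """Steps 3-4: store whichever optional fields the participant filled in."""
    provided = {f: answers[f] for f in FIELDS if answers.get(f)}
    return {
        "condition": condition,
        "answers": provided,
        "fields_provided": len(provided),  # outcome: how much data was shared
        "completed": True,  # reaching this point fulfils the HIT requirements
    }
```

Keeping everything identical except `disclosure_text` (and the post-process sharing choice) is what lets any difference in `fields_provided` or completion be attributed to the free/captive framing.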
  • Some aspects to test:
    • Will free participants be more likely to complete the process than captive ones?
    • Will free participants provide more data to more vendors than captive ones?
    • What types of data might free participants be more willing to provide?
    • Will free participants be more likely to share the experience with their friends?
    • Will free participants be more likely to listen and purchase recommended songs?
    • How will specifics of the experience and specific wording affect individuals' willingness to participate?
    • Are there certain individual characteristics (e.g. age) that predict willingness to participate?
  • Potential issues / items to resolve:
    • Is this a compelling enough experiment? Will we learn meaningful things? Are we really measuring FREE vs. CAPTIVE customer experiences?
      • Is our definition of free and captive so arbitrary or context-specific as to lose experimental merit?
    • Is the fundamental recommendation experience valuable enough to encourage listening and purchasing of music tracks and ultimately, sharing with friends? Is there a way to make this more compelling?
    • Can we collect information and disclose its purposes in such a way as to accurately leverage recommendation APIs and not be deceitful, yet still create a clear and compelling delineation between free and captive participation?
    • How will we control for willingness to purchase/listen if one process potentially alters the quality of the music recommendations? Is this necessary to control for?
    • Should captive participants also have the post-process option of sharing the data with multiple vendors?
    • Should other free group options exist, such as the ability to download your entered information in a standard format or share your preferences and recommendations with a friend?
    • How will participants be paid so as not to influence whether or not they choose to provide information (or alternatively, simply skip the process and collect their $.20)?
    • What music recommendation APIs are available, what types of data do they require to generate quality recommendations (and is this standard)?
    • How might trust issues with the data collector (i.e. Berkman) influence outcomes?
    • How will the music services themselves (e.g. perceived brand trust and value) affect outcomes (and how might we control for this)?
    • What are the experimental disclosure requirements here - especially as they relate to personal information gathering that likely won't be used to generate music recommendations?
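
Several of the questions under "Some aspects to test" reduce to comparing a rate between the two groups (completion, sharing, purchase). A two-proportion z-test is one standard way to make that comparison; the counts below are invented purely for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 80 of 100 free participants completed the process,
# versus 65 of 100 captive participants
z = two_proportion_z(80, 100, 65, 100)
# |z| > 1.96 would indicate a difference significant at the 5% level
```

With samples this small, power analysis before launching HITs would tell us how many participants per group we actually need to detect an effect of plausible size.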

Additional Scenario Possibilities

Scenario 2

  • Assign users to either the role of Vendor or Customer and pair them up. Customers gather music listening preferences and habits about themselves through either a user-driven, open tool and process or through a vendor-driven, choice-free process.
  • The results of these processes are shared with their vendor partners who are asked to make a music download recommendation to their customer based on the information shared. The vendor receives a larger reward if the customer selects their recommended download over a (smaller) cash prize.
  • This scenario goes beyond demonstrating increased sharing to test the idea that openness can reduce guesswork and increase sales for the vendor.

Scenario 3

  • Require AMT participants to use Eyebrowse software to collect browser history data.
  • Create two scenarios - one that puts the user in charge of sharing what/how/to whom and another where the data is uploaded to a commercial vendor as part of the HIT.
  • Measure willingness of participants to complete the task and subsequently to upload their data for the two scenarios
  • (NOTE: Can Eyebrowse allow for non-sharing of data?)

Project Status

  • A meeting with geeks on 10/29 produced some rough research directions and a commitment from Berkman staffers to help execute
  • Additional meetings (11/2, 11/3) between Keith Hopper and Jason Callina, and between Keith Hopper and Tim Hwang, to discuss possible scenarios and where to seek additional advice/support
  • There are clear benefits to producing research not only for the VRM community but also for the business community. Both Zeo and Personal Black Box (interestingly, both startup orgs) have expressed a strong interest in research that helps clarify and "prove" the benefits of vendors opening up control to the user.
  • Specific research proposal is shaping up involving the use of Amazon Mechanical Turk and based on code and data acquisition mechanisms already constructed and tested by Berkman staff for other research projects (cooperation project). See Specific Research Proposals.

Sources/Background