User:Alexander L. Davis/Notebook/In the Problem Pit/2013/03/14

Entry title

First Pass

Comments

These are the first two pretests. I have a vague concern about the research: I do not know exactly what the focus or story is. I am considering xx. I created an MTurk qualification for one participant so he can be in future problem pit studies. I need to create an acceptance sampling method to keep the number of pretest participants bounded.

  • The questions they wanted to ask were very specific to the problem, much more specific than we were expecting. Their main concerns were logistical: confidentiality, security, what to do with the frame, and how it works.
  • They seemed to base their questions on what would concern them, confirming the self-projection theory.
  • It is going to be important to draw the citizen science sample from the same population as those who are offered the program. If people use self-projection, then those projections will be most valid from the actual sample, rather than MTurk masters participants who may be idiosyncratic.
  • Their ability to generate questions does not seem strong. They pick up on the intuitive things most people would think of: do they want the device, is the money enough. What methods could help them generate more effective questions from their knowledge? Are we already tapping that?

Unexpected Observations

  • The person in the configural condition took an unexpected approach, asking a progression of questions about the target person's willingness to participate, as if eliciting a willingness-to-accept for the program.

New Hypotheses

  • There seems to be a consistent self-projection element: people assume others would enroll, or decline, for the same reasons they themselves would.

Current Protocol

  • Here is the idea for the acceptance sampling method: the discovery of new ideas (new errors) across pretest participants can be modeled with a failure distribution. We want to estimate that distribution's parameter. What sample size is needed to do that? A rough sketch of the calculation is below.
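
A minimal sketch of that sample-size calculation, assuming each additional pretest participant independently surfaces at least one new issue with probability p (a simple Bernoulli "failure" model, one possible reading of the failure distribution above). The function names, the tolerance alpha, and the example flags are illustrative assumptions, not part of the protocol:

  import math

  def sample_size_for_coverage(p, alpha=0.05):
      """Smallest n such that an issue raised with per-participant
      probability p goes unseen in n participants with chance <= alpha,
      i.e. (1 - p)**n <= alpha."""
      if not 0 < p < 1:
          raise ValueError("p must be in (0, 1)")
      return math.ceil(math.log(alpha) / math.log(1 - p))

  def estimate_p(new_issue_flags):
      """Estimate p as the fraction of pretest participants who raised
      at least one issue not seen in earlier pretests."""
      return sum(new_issue_flags) / len(new_issue_flags)

  # Made-up pretest record: 1 = participant surfaced a new issue.
  flags = [1, 1, 0, 1, 0, 0]
  p_hat = estimate_p(flags)                # 0.5 for these made-up flags
  print(sample_size_for_coverage(p_hat))   # 5 participants for 95% coverage

With an estimate of p in hand, the pretest can stop once the required n is reached, which keeps the number of pretest participants bounded.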

Current Materials

New Data

  • Data
  • Both participants were U.S. and had MTurk masters qualification.
  • Both said they would not enroll; they seem to have been more honest than the last sample.

Participant 1

  • "The survey itself was nicely designed, but it was frustrating to have to advance the page so often. More than one question per page is preferable, in my opinion. I didn't really like the Recruitment Document being in a separate window, I think it could be included in the survey itself, as an example, and it would be easier to reference. I thought it was really too long, with too many questions that seemed like variations on the same theme. Those are pretty much the only suggestions that I have to offer, outside of what I answered in the survey itself."
  • Had too many unanswered questions to make a decision about enrolling:
    • How does the frame work?
    • How does it affect privacy?
    • Who has access to photos/data? Who owns the rights?
    • What happens if they withdraw from the study?
    • Do they have to give the frame back at the end of the study?
  • People would enroll if:
    • They have the right information.
    • They really need money.
    • They really want the photo frame.
    • They think the program could help them save energy.
  • Suggested giving $5 to people who contact us rather than $2 to everyone (I disagree; pre-payment works, and $2 is a special amount because of the bill).
  • If contacted by mail, wouldn't make the "extra effort" to figure out the program details.
  • Independent questions seemed to revolve around the desires of the participant:
    • "Are you interested in finding out how to save money on your electricity costs?"
    • "Would you like to participate in a study that will benefit science?"
    • "Would you like the use of a free photo frame?"
    • "Would you be agreeable to having the frame in your home?"

Participant 2

  • Similarly felt like there was not enough information about the content and process of the program.
  • Said the program was "a bit invasive," but might consider doing it if offered more money.
  • Believed others would not enroll for the same reasons: invasiveness, too little compensation, and that it "might be difficult to remember what to do and when."
  • Others would enroll if they didn't care about the invasiveness.
  • The most important thing to explain is how the frame will track usage (this needs correcting: the smart meter already tracks usage, so participants need to know their usage is already being tracked and the frame just lets them see it).
  • Make the expectations clear (what is expected of me?).
  • Have an image of the frame rather than a link.
  • Explain why we want this information and how it will be used.
  • People throw the mail away without reading it.
  • Might use a telephone interview.

Faults

  • Advancing the page too often
  • Include recruitment document in the survey itself, rather than a separate window
  • Too long
  • Redundant questions

Corrections

  • Broke the questionnaire up into multiple pages, with a random subset of two questions on each page (see the sketch after this list).
  • Cut the length by having participants do two-thirds (55) of the predictions.
  • Need to explain how the frame works.
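
A sketch of how that page randomization could be implemented. The notes only fix 55 kept predictions and two questions per page; the pool size of 83 items, the seed, and the names are illustrative assumptions:

  import random

  def build_pages(items, n_keep=55, per_page=2, seed=None):
      """Randomly keep n_keep prediction items for a participant and
      split them into pages of per_page questions each."""
      rng = random.Random(seed)
      kept = rng.sample(items, n_keep)
      return [kept[i:i + per_page] for i in range(0, len(kept), per_page)]

  # Hypothetical item pool; the real predictions come from the survey.
  pool = ["prediction_%d" % i for i in range(83)]
  pages = build_pages(pool, seed=42)
  print(len(pages))   # 28 pages: 27 with two questions, one with the leftover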