Prescreen

AttentionTest

class psynet.prescreen.AttentionTest(label='attention_test', pages=2, fail_on='attention_test_1', prompt_1_explanation="\\n            Research on personality has identified characteristic sets of behaviours and cognitive patterns that\\n            evolve from biological and enviromental factors. To show that you are paying attention to the experiment,\\n            please ignore the question below and select the 'Next' button instead.", prompt_1_main='As a person, I tend to be competitive, jealous, ambitious, and somewhat impatient.', prompt_2='What is your favourite color?', attention_test_2_word='attention', time_estimate_per_trial=5.0)[source]

This is an attention test aimed at identifying and excluding participants who are not paying attention or not following the instructions. The test comprises two pages; researchers can choose whether to display one or both pages, and which information to display on each. Researchers can also choose the conditions under which participants are excluded (determined by fail_on).

Parameters:
  • label (string) – The label of the AttentionTest module, default: “attention_test”.

  • pages (int) – Whether to display only the first or both pages. Possible values: 1 and 2. Default: 2.

  • fail_on (str) – The condition under which the AttentionTest check fails. Possible values: “attention_test_1”, “attention_test_2”, “any”, “both”, and None. Here, “any” means the participant must pass both checks to continue, “both” means the participant can fail one of the two checks and still continue, and None means the participant can fail both checks and still continue. Default: “attention_test_1”.

  • prompt_1_explanation (str) – The text (including HTML code) to display in the first part of the first paragraph of the first page. Default: “Research on personality has identified characteristic sets of behaviours and cognitive patterns that evolve from biological and enviromental factors. To show that you are paying attention to the experiment, please ignore the question below and select the ‘Next’ button instead.”

  • prompt_1_main (str) – The text (including HTML code) to display in the last paragraph of the first page. Default: “As a person, I tend to be competitive, jealous, ambitious, and somewhat impatient.”

  • prompt_2 (str) – The text to display on the second page. Default: “What is your favourite color?”.

  • attention_test_2_word (str) – The word that the user has to enter on the second page. Default: “attention”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 5.0.
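The fail_on semantics described above can be sketched in plain Python. The helper name and signature below are illustrative only, not part of the psynet API:

```python
def attention_test_failed(check_1_passed, check_2_passed, fail_on):
    """Decide whether a participant fails the attention test.

    Sketch of the documented fail_on semantics; hypothetical helper,
    not taken from the psynet source.
    """
    if fail_on == "attention_test_1":
        return not check_1_passed
    if fail_on == "attention_test_2":
        return not check_2_passed
    if fail_on == "any":
        # Failing either check excludes the participant.
        return not (check_1_passed and check_2_passed)
    if fail_on == "both":
        # Only failing both checks excludes the participant.
        return not (check_1_passed or check_2_passed)
    if fail_on is None:
        # Never exclude, regardless of the check results.
        return False
    raise ValueError(f"Unknown fail_on value: {fail_on!r}")
```

For example, a participant who passes the first check but fails the second would be excluded under fail_on="any" but allowed to continue under fail_on="both".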

ColorBlindnessTest

class psynet.prescreen.ColorBlindnessTest(label='color_blindness_test', media_url='https://s3.amazonaws.com/ishihara-eye-test/jpg', time_estimate_per_trial=5.0, performance_threshold=4, hide_after=3.0, trial_class=<class 'psynet.prescreen.ColorBlindnessTrial'>, locale='en')[source]

The color blindness test checks the participant’s ability to perceive colors. In each trial an image is presented which contains a number and the participant must enter the number that is shown into a text box. The image disappears after 3 seconds by default, which can be adjusted by providing a different value in the hide_after parameter.

Parameters:
  • label (string) – The label for the color blindness test, default: “color_blindness_test”.

  • media_url (string) – The url under which the images to be displayed can be referenced, default: “https://s3.amazonaws.com/ishihara-eye-test/jpg”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 5.0.

  • performance_threshold (int) – The performance threshold, default: 4.

  • hide_after (float, optional) – The time in seconds after which the image disappears, default: 3.0.

  • trial_class – Trial class to use, default: ColorBlindnessTrial.

ColorVocabularyTest

class psynet.prescreen.ColorVocabularyTest(label='color_vocabulary_test', time_estimate_per_trial=5.0, performance_threshold=4, colors=None, trial_class=<class 'psynet.prescreen.ColorVocabularyTrial'>)[source]

The color vocabulary test checks the participant’s ability to name colors. In each trial, a colored box is presented and the participant must choose from a set of colors which color is displayed in the box. The colors which are presented can be freely chosen by providing an optional colors parameter. See the documentation for further details.

Parameters:
  • label (string) – The label for the color vocabulary test, default: “color_vocabulary_test”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 5.0.

  • performance_threshold (int) – The performance threshold, default: 4.

  • colors (list, optional) – A list of tuples each representing one color option. The tuples are of the form (“color-name”, [H, S, L]) corresponding to hue, saturation, and lightness. Hue takes integer values in [0-360]; saturation and lightness take integer values in [0-100]. Default: the list of the six colors “turquoise”, “magenta”, “granite”, “ivory”, “maroon”, and “navy”.

  • trial_class – Trial class to use, default: ColorVocabularyTrial.
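The documented colors format, a list of ("color-name", [H, S, L]) tuples, can be illustrated with a small sketch. The validation helper below is hypothetical, for illustration only; psynet performs its own handling internally:

```python
def validate_colors(colors):
    """Check each entry against the documented ("name", [H, S, L]) format.

    Hypothetical helper, not part of the psynet API.
    """
    for name, (hue, saturation, lightness) in colors:
        assert isinstance(name, str)
        assert 0 <= hue <= 360          # hue in degrees
        assert 0 <= saturation <= 100   # saturation in percent
        assert 0 <= lightness <= 100    # lightness in percent
    return True

# A custom palette of this form could be passed as
# ColorVocabularyTest(colors=custom_colors).
custom_colors = [
    ("red", [0, 100, 50]),
    ("navy", [240, 100, 25]),
    ("ivory", [60, 100, 97]),
]
```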

HugginsHeadphoneTest

class psynet.prescreen.HugginsHeadphoneTest(label='huggins_headphone_test', media_url=None, time_estimate_per_trial=7.5, performance_threshold=4, n_trials=6)[source]

Implements: Milne, A.E., Bianco, R., Poole, K.C. et al. An online headphone screening test based on dichotic pitch. Behav Res 53, 1551–1562 (2021). https://doi.org/10.3758/s13428-020-01514-0

get_trial_class(node=None, participant=None, experiment=None)[source]

Returns the class of trial to be used for this trial maker.

AntiphaseHeadphoneTest

Deprecated since version 10: No longer works reliably with modern headphones. It functions more as a loudness test, since it can be completed with loudspeakers. Use psynet.prescreen.HugginsHeadphoneTest instead.

class psynet.prescreen.AntiphaseHeadphoneTest(label='antiphase_headphone_test', media_url=None, time_estimate_per_trial=7.5, performance_threshold=4, n_trials=6)[source]

Implements: Woods, K. J. P., Siegel, M. H., Traer, J., & McDermott, J. H. (2017). Headphone screening to facilitate web-based auditory experiments. Attention, perception & psychophysics, 79(7), 2064–2072. https://doi.org/10.3758/s13414-017-1361-2

Note: we currently recommend using the HugginsHeadphoneTest instead.

get_trial_class(node=None, participant=None, experiment=None)[source]

Returns the class of trial to be used for this trial maker.

LanguageVocabularyTest

class psynet.prescreen.LanguageVocabularyTest(label='language_vocabulary_test', language_code='en-US', media_url='https://s3.amazonaws.com/langauge-test-materials', time_estimate_per_trial=5.0, performance_threshold=6, n_trials=7, trial_class=<class 'psynet.prescreen.LanguageVocabularyTrial'>)[source]

This is a basic language vocabulary test supported in five languages (determined by language_code): American English (en-US), German (de-DE), Hindi (hi-IN), Brazilian Portuguese (pt-BR), and Spanish (es-ES). In each trial, a spoken word is played in the target language and the participant must decide which of four images in the choice set matches the spoken word. The materials are the same for all languages. The trials are randomly selected from a total pool of 14 trials.

Parameters:
  • label (string) – The label for the language vocabulary test, default: “language_vocabulary_test”.

  • language_code (string) – The language code of the target language for the test (en-US, de-DE, hi-IN, pt-BR, es-ES), default: “en-US”.

  • media_url (str) – Location of the test materials, default: “https://s3.amazonaws.com/langauge-test-materials”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 5.0.

  • performance_threshold (int) – The performance threshold, default: 6.

  • n_trials (int) – The total number of trials to display, default: 7.

  • trial_class – Trial class to use, default: LanguageVocabularyTrial
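The selection behaviour described above, n_trials drawn at random from a pool of 14, can be sketched as follows. The helper is illustrative only, not the actual psynet implementation:

```python
import random

POOL_SIZE = 14  # the total trial pool mentioned above

def select_trials(n_trials, seed=None):
    """Draw n_trials distinct trial indices at random from the pool.

    Illustrative sketch; psynet's own trial selection may differ in detail.
    """
    rng = random.Random(seed)
    return rng.sample(range(POOL_SIZE), n_trials)
```

With the default n_trials=7, this yields 7 distinct trials out of the 14 available.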

LexTaleTest

class psynet.prescreen.LexTaleTest(label='lextale_test', time_estimate_per_trial=2.0, performance_threshold=8, media_url='https://s3.amazonaws.com/lextale-test-materials', hide_after=1, n_trials=12, trial_class=<class 'psynet.prescreen.LextaleTrial'>)[source]

This is a shortened adaptation of the original LexTale test, which assesses participants’ English proficiency in a lexical decision task: “Lemhöfer, K., & Broersma, M. (2012). Introducing LexTALE: A quick and valid lexical test for advanced learners of English. Behavior research methods, 44(2), 325-343”. In each trial, a word is presented for a short period of time (determined by hide_after) and the participant must decide whether it is a real English word or not. The words are taken from the original study, which used and validated highly infrequent English words to make the task difficult for non-native English speakers. See the documentation for further details.

Parameters:
  • label (string) – The label for the LexTale test, default: “lextale_test”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 2.0.

  • performance_threshold (int) – The performance threshold, default: 8.

  • media_url (str) – Location of the media resources, default: “https://s3.amazonaws.com/lextale-test-materials”.

  • hide_after (float) – The time in seconds after which the word disappears, default: 1.0.

  • n_trials (int) – The total number of trials to display, default: 12.

  • trial_class – Trial class to use, default: LextaleTrial
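Assuming the usual prescreen pattern of comparing the number of correct responses against performance_threshold, the lexical decision scoring might look like the sketch below. The function name and signature are assumptions, not taken from the psynet source:

```python
def lextale_passed(responses, answers, performance_threshold=8):
    """Score a block of lexical decisions (True = "is a real English word").

    Illustrative sketch under the assumption that the check passes when
    the number of correct responses meets performance_threshold; the
    actual psynet scoring may differ.
    """
    n_correct = sum(r == a for r, a in zip(responses, answers))
    return n_correct >= performance_threshold
```

With the defaults above (12 trials, threshold 8), a participant could miss up to 4 trials and still pass.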

REPPMarkersTest

class psynet.prescreen.REPPMarkersTest(label='repp_markers_test', performance_threshold=0.6, materials_url='https://s3.amazonaws.com/repp-materials', n_trials=3, time_estimate_per_trial=12.0, trial_class=<class 'psynet.prescreen.RecordMarkersTrial'>)[source]

This markers test determines whether participants’ hardware and software meet the technical requirements of REPP, screening out issues such as malfunctioning speakers or microphones, or the use of strong noise-cancellation technologies. To make the most of it, the markers check should be run at the beginning of the experiment, after general instructions covering the experiment’s technical requirements. In each trial, the markers check plays a test stimulus containing six marker sounds. The stimulus is recorded with the participant’s microphone and analyzed using REPP’s signal processing pipeline. During the marker playback time, participants are supposed to remain silent (not respond).

Parameters:
  • label (string) – The label for the markers check, default: “repp_markers_test”.

  • performance_threshold (float) – The performance threshold, default: 0.6.

  • materials_url (string) – The location of the REPP materials, default: https://s3.amazonaws.com/repp-materials.

  • n_trials (int) – The total number of trials to display, default: 3.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 12.0.

  • trial_class – The trial class to use, default: RecordMarkersTrial
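Given the default performance_threshold of 0.6 and n_trials of 3, the pass rule plausibly compares the proportion of successful trials against the threshold. The sketch below is an assumption for illustration, not the actual psynet scoring:

```python
def markers_check_passed(n_successful, n_total, performance_threshold=0.6):
    """Pass when the proportion of successful trials meets the threshold.

    Hypothetical helper; the names and exact scoring rule are assumptions,
    not taken from the psynet source.
    """
    return (n_successful / n_total) >= performance_threshold
```

Under this reading, 2 of 3 successful trials (a proportion of about 0.67) would pass, while 1 of 3 would not.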

REPPTappingCalibration

class psynet.prescreen.REPPTappingCalibration(label='repp_tapping_calibration', time_estimate_per_trial=10.0, min_time_before_submitting=5.0, materials_url='https://s3.amazonaws.com/repp-materials')[source]

This is a tapping calibration test to be used when implementing SMS (sensorimotor synchronization) experiments with REPP. It also contains the main instructions on how to tap using this technology.

Parameters:
  • label (string) – The label for the REPPTappingCalibration test, default: “repp_tapping_calibration”.

  • time_estimate_per_trial (float) – The time estimate in seconds per trial, default: 10.0.

  • min_time_before_submitting (float) – Minimum time to wait (in seconds) while the music plays and the participant cannot submit a response, default: 5.0.

  • materials_url (string) – The location of the REPP materials, default: https://s3.amazonaws.com/repp-materials.

class AudioMeter(calibrate=False, show_next_button=True, min_time=0.0, bot_response=<class 'psynet.utils.NoArgumentProvided'>, **kwargs)[source]

REPPVolumeCalibrationMarkers

class psynet.prescreen.REPPVolumeCalibrationMarkers(label='repp_volume_calibration_markers', materials_url='https://s3.amazonaws.com/repp-materials', min_time_on_calibration_page=5.0, time_estimate_for_calibration_page=10.0)[source]

This is a volume calibration test to be used when implementing SMS experiments with metronome sounds and REPP. It contains a page with the general technical requirements of REPP and then plays a metronome sound to help participants set an appropriate volume for REPP.

Parameters:
  • label (string) – The label for the REPPVolumeCalibrationMarkers test, default: “repp_volume_calibration_markers”.

  • materials_url (string) – The location of the REPP materials, default: https://s3.amazonaws.com/repp-materials.

  • min_time_on_calibration_page (float) – Minimum time (in seconds) that the participant must spend on the calibration page, default: 5.0.

  • time_estimate_for_calibration_page (float) – The time estimate for the calibration page, default: 10.0.

class AudioMeter(calibrate=False, show_next_button=True, min_time=0.0, bot_response=<class 'psynet.utils.NoArgumentProvided'>, **kwargs)[source]

REPPVolumeCalibrationMusic

class psynet.prescreen.REPPVolumeCalibrationMusic(label='repp_volume_calibration_music', materials_url='https://s3.amazonaws.com/repp-materials', min_time_on_calibration_page=5.0, time_estimate_for_calibration_page=10.0)[source]

This is a volume calibration test to be used when implementing SMS experiments with music stimuli and REPP. It contains a page with the general technical requirements of REPP and a volume calibration test with a visual sound meter and a stimulus customized to help participants find the right volume for REPP.

Parameters:
  • label (string) – The label for the REPPVolumeCalibration test, default: “repp_volume_calibration_music”.

  • materials_url (string) – The location of the REPP materials, default: https://s3.amazonaws.com/repp-materials.

  • min_time_on_calibration_page (float) – Minimum time (in seconds) that the participant must spend on the calibration page, default: 5.0.

  • time_estimate_for_calibration_page (float) – The time estimate for the calibration page, default: 10.0.

class AudioMeter(calibrate=False, show_next_button=True, min_time=0.0, bot_response=<class 'psynet.utils.NoArgumentProvided'>, **kwargs)[source]