An introduction to a survey bot detection algorithm

Part 2: Bot detection algorithm

Carl F. Falk & Michael J. Ilagan

Workshop Outline

  • Part 1: Bots and detection preliminaries
    • Lecture (\(\approx\) 30 min)
    • Shiny Apps (\(\approx\) 10 min)
  • Part 2: Bot detection algorithm
    • Lecture (\(\approx\) 30 min)
    • R (\(\approx\) 15 min)

Workshop Materials

R packages to install

  • We assume you are familiar with R
  • detranli = DEtection of RANdom LIkert-type responses
  • Install from source? Say no for now
# Used for visualizations, installing the detranli package
install.packages(c("ggplot2","GGally"), type="binary")
install.packages("devtools", type="binary")

# stable version (recommended)
devtools::install_github("michaeljohnilagan/detranli")

# experimental version (install_github builds from source; no type argument)
#devtools::install_github("falkcarl/detranli")

# some example datasets
install.packages(c("psychTools","qgraph"), type="binary")

Outline for Part 2

  1. Ilagan & Falk’s (in press) Algorithm
  2. R code (detranli_demo.R)
  3. Algorithm Performance
  4. Loose ends/FAQ

Ilagan & Falk’s (in press) algorithm

Ingredients and output

  1. A method to generate synthetic bots (🤖)

  2. De-biased NRIs to quantify suspiciousness

  3. \(p\)-values used to obtain desired sensitivity

Null hypothesis significance testing

  • Distribution of the test statistic under \(H_0\) (null hypothesis)
  • If the observed statistic is far in the tail, reject \(H_0\)
  • One-tailed test if theory is strong
    • Quantify suspiciousness with some statistic; higher = more suspicious

Specificity calibration

  • Specificity: Proportion of 👶 spared 👍
  • If we have a sample of representative 👶 …
  • We can calibrate specificity:
    • For \(95\%\) specificity, find the cutoff so that \(5\%\) of 👶 fall in the right tail

Sensitivity calibration

  • Sensitivity: Proportion of 🤖 flagged 🚩
  • If we have a sample of representative 🤖 …
  • We can calibrate sensitivity:
    • For \(95\%\) sensitivity, find the cutoff so that \(5\%\) of 🤖 fall in the left tail (see sketch below)
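
A minimal sketch of both calibrations in R, assuming hypothetical vectors nri_humans and nri_bots holding an NRI (higher = more suspicious) for known exemplars, and nri_new for new responses:

# Specificity: spare 95% of known humans
# (cutoff puts 5% of humans in the right tail)
cutoff_spec <- quantile(nri_humans, probs = .95)

# Sensitivity: flag 95% of known bots
# (cutoff puts 5% of bots in the left tail)
cutoff_sens <- quantile(nri_bots, probs = .05)

# Flag new responses whose NRI exceeds a chosen cutoff
flagged <- nri_new > cutoff_sens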

Classification Accuracy

  • Proportion of all cases correctly classified
    • 🤖 as 🚩, 👶 as 👍
  • The best cutoff is often where the two distributions intersect (see sketch below)
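
To make “best cutoff” concrete, a sketch that scans candidate cutoffs for the one maximizing accuracy, assuming hypothetical vectors nri (the statistic) and is_bot (TRUE/FALSE labels):

# Accuracy at each candidate cutoff; the maximizer is often near
# where the human and bot distributions intersect
cutoffs <- seq(min(nri), max(nri), length.out = 100)
acc <- sapply(cutoffs, function(cut) mean((nri > cut) == is_bot))
best_cutoff <- cutoffs[which.max(acc)]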

Changes with proportion of 🤖 in sample

Optimal cut-offs usually don’t generalize from sample to sample

Classification and Machine Learning

  • Supervised learning
    • (Many) known exemplars of 👶 and 🤖
    • Representative of the sample, in the same proportions
    • Can maximize accuracy
      • Example: Logistic regression, NRI \(\to\) (👶 or 🤖); see sketch below
      • Optimal coefficients used to classify new observations
  • Unsupervised learning
    • No known exemplars (esp. 👶)
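
A sketch of the supervised route (not part of detranli), assuming a hypothetical data frame train with an NRI column nri and labels is_bot (TRUE/FALSE):

# Fit logistic regression: NRI -> (human or bot)
fit <- glm(is_bot ~ nri, data = train, family = binomial)

# Use the fitted coefficients to classify new observations
p_bot <- predict(fit, newdata = data.frame(nri = nri_new), type = "response")
pred  <- ifelse(p_bot > .5, "bot", "human")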

Summary of “hypothesis testing”

  • If we have (lots of) known 👶 and 🤖
    • Maximize Classification Accuracy
  • If we have (lots of) known 👶
    • Calibrate specificity
  • If we have (lots of) known 🤖
    • Calibrate sensitivity
    • e.g., flag 95% of 🤖

If we have lots of known 👶, why are we doing the study again? 🤔

Consider no known 👶: we can still calibrate sensitivity, given known 🤖.

What if we don’t have known 🤖 either?

How can we generate a 🤖? 🤔

For each item, generate a response…

  • From a uniform distribution
    • Each category has equal probability
  • From a binomial distribution
    • Sum of 4 fair coin flips (0/1), plus 1 (range 1 to 5)
  • How do we know which way is representative of our 🤖? 🤔 (see sketch below)
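
Both generation schemes are one-liners in base R; a sketch for 18 five-point items:

# Uniform bot: categories 1-5 equally likely
bot_unif <- sample(1:5, size = 18, replace = TRUE)

# Binomial bot: sum of 4 fair coin flips (0/1), plus 1, giving values in 1-5
bot_binom <- rbinom(18, size = 4, prob = .5) + 1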

Item exchangeability assumption

Response options: (Unable to do) 1 2 3 4 5 (Without any difficulty)

  1. Are you able to run or jog for two miles (3 km)?
  2. Are you able to stand for short periods of time?

For 👶, probability of each category depends on the item

For 🤖, the response process does not depend on the item

Items are exchangeable for 🤖

Consider just one row

  ID   Item 1   Item 2   Item 3   Item 4   Item 5
   1      4        2        3        4        5
   2      1        1        1        1        3
   3      5        4        2        4        3
   4      5        2        5        5        5
   5      1        2        2        3        4

Permutations to create synthetic bots

Original responses for one row, and permutations of it:

  Original:        4  2  3  4  5   (👶 or 🤖)
  Permutation 1:   2  4  4  3  5   🤖
  Permutation 2:   5  4  3  2  4   🤖
  Permutation 3:   5  2  4  3  4   🤖
  Permutation 4:   4  4  2  5  3   🤖
  Permutation 5:   4  3  5  2  4   🤖
  Permutation 6:   4  4  3  5  2   🤖

Permute many times to create synthetic bots for this original row (a sketch follows below)
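
A minimal sketch of this permutation step in base R (the detranli package does this internally):

# One synthetic bot = one random permutation of the row's own responses
row1 <- c(4, 2, 3, 4, 5)
sample(row1)

# Repeat many times: each row of the result is one synthetic bot
synth_bots <- t(replicate(1000, sample(row1)))
dim(synth_bots) # 1000 x 5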

Compare row to its synthetic bots

  ID   Item 1   Item 2   Item 3   Item 4   Item 5   Synth. Bots
   1      4        2        3        4        5     \(\to\) 🤖 🤖 🤖 🤖 🤖 🤖 🤖 🤖 …

  • Need a fair way to compare each row vs. its synthetic bots

Ingredients and output

  1. A method to generate synthetic bots (🤖) ✔️

  2. De-biased NRIs to quantify suspiciousness

  3. \(p\)-values used to obtain desired sensitivity

Requirements for NRIs

  1. Computed only from Likert-type items
  2. Ideal point for least-suspicious value
  NRI                          Ideal Point   Good Perf.   No Other Reqs.
  Mahalanobis Distance              0            ✔️             ✔️
  PT Correlation                   +1            ✔️             ✔️
  PT Cosine Similarity             +1            ?              ✔️
  Longstring                        ?                           ✔️
  Intra-indiv. response var.        ?                           ✔️
  Functional method                +1            ✔️
  Psy. Antonyms/Synonyms          -1/+1

  • Promise in prior work (e.g., Dupuis et al., 2018)
  • Don’t require knowledge of factor structure or reverse-coded items

Example of bias

  ID   Item 1   Item 2   Item 3   Item 4   Item 5
   1      5        3        1        4        2
   2      1        1        1        1        3
   3      4        2        3        4        5
   4      5        2        5        5        5
   5      1        2        2        3        4

  Means (all rows):           3.2    2.0    2.4    3.4    3.8
  Means (excluding Row 1):    2.75   1.75   2.75   3.25   4.25

Consider PTC computation for Row 1

Row 1 included in computation of means: PTC \(\approx .26\) (👍?)

Inclusion of Row 1 makes it look less suspicious

Row 1 NOT included in computation of means: PTC \(\approx -.17\) (👎?)

This is the leave-one-out idea: exclude the row itself when computing the means it is compared against

Computation of NRIs

  • Row \(i\) (👶 or 🤖) ↔ its synthetic bots (🤖 🤖 🤖 🤖 🤖 🤖 🤖 🤖 …)
    • Leave-one-out
      • For row \(i\), use the remaining \(N-1\) observations to compute means and covariances for NRIs (see sketch below)
    • Neither the row nor its synthetic bots have an advantage in looking more/less suspicious
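
A leave-one-out sketch for a single row, assuming dat is an \(N \times J\) Likert data matrix and that detranli's mahal() accepts a single-row input:

# Means and covariances come from the other N-1 rows, not row i
i <- 1
mu_loo  <- colMeans(dat[-i, ])
sig_loo <- cov(dat[-i, ])

# Row i's NRI against the leave-one-out moments
md_i <- mahal(dat[i, , drop = FALSE], mu = mu_loo, sig = sig_loo)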

Example rows vs. their synthetic bots

Ingredients and output

  1. A method to generate synthetic bots (🤖) ✔️

  2. De-biased NRIs to quantify suspiciousness ✔️

  3. \(p\)-values used to obtain desired sensitivity

NRIs \(\to\) one-dimensional space

  • Distance from each row’s NRI point to the ideal point (0, +1)

\(p\)-value

  • Where the row’s distance falls within the distribution of its synthetic bots’ distances (see sketch below)
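
A conceptual sketch of both steps; cnrdetect() handles this internally and its exact scaling may differ. md_obs/ptc_obs (the row's NRIs) and md_synth/ptc_synth (vectors of its synthetic bots' NRIs) are hypothetical:

# Distance to the ideal point (MD ideal = 0, PTC ideal = +1)
dist_obs   <- sqrt((md_obs - 0)^2 + (ptc_obs - 1)^2)
dist_synth <- sqrt((md_synth - 0)^2 + (ptc_synth - 1)^2)

# Under H0 (row is a bot): a small p-value means the row sits closer to
# the ideal point than nearly all of its synthetic bots -> predict human
pval <- mean(dist_synth <= dist_obs)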

Classification

  • \(p\)-values
    • Under null that the row is a bot
    • Unique to each row of the dataset
  • For .95 (95%) sensitivity…
    • \(p < .05\) \(\to\) 👍 (i.e., predict 👶)
    • \(p \ge .05\) \(\to\) 🚩 (i.e., predict 🤖)
  ID   Item 1   Item 2   Item 3   Item 4   Item 5   \(p\)-value   Decision
   1      4        2        3        4        5         .62          🚩
   2      1        1        1        1        3         .39          🚩
   3      5        4        2        4        3         .02          👍
   4      5        2        5        5        5         .42          🚩
   5      1        2        2        3        4         .33          🚩

Ingredients and output

  1. A method to generate synthetic bots (🤖) ✔️

  2. De-biased NRIs to quantify suspiciousness ✔️

  3. \(p\)-values used to obtain desired sensitivity ✔️

R code

Demonstration data

  • See R script: detranli_demo.R https://osf.io/vnuew/
  • Load package and example dataset
    • detranli assumes items coded as integers, starting at 1 (1, 2, 3, 4, 5)
library(detranli)

data("cnrexample1")
item01 item02 item03 item04 item05 item06 item07 item08 item09 item10 item11 item12 item13 item14 item15 item16 item17 item18
1 1 1 1 3 1 1 1 1 1 3 2 4 2 3 5 5 5
4 2 3 4 5 4 2 2 1 1 2 1 2 1 2 3 5 4
5 2 5 5 5 5 5 3 3 5 5 4 3 1 1 1 3 4
1 2 2 3 4 3 4 4 3 5 5 2 5 2 4 5 4 5
1 1 1 2 4 4 3 2 1 4 5 2 4 3 5 5 5 5
5 5 5 5 5 5 4 2 2 3 5 4 3 4 3 5 5 5
... (output truncated; 300 rows × 18 items)
psych::describe(cnrexample1)
       vars   n mean   sd median trimmed  mad min max range  skew kurtosis   se
item01    1 300 3.14 1.40    3.0    3.18 1.48   1   5     4 -0.11    -1.26 0.08
item02    2 300 2.69 1.39    2.5    2.62 2.22   1   5     4  0.33    -1.16 0.08
item03    3 300 2.94 1.37    3.0    2.92 1.48   1   5     4  0.16    -1.22 0.08
item04    4 300 3.35 1.40    3.0    3.44 1.48   1   5     4 -0.34    -1.14 0.08
item05    5 300 3.41 1.39    4.0    3.51 1.48   1   5     4 -0.28    -1.26 0.08
item06    6 300 3.16 1.44    3.0    3.20 1.48   1   5     4 -0.20    -1.32 0.08
item07    7 300 3.06 1.42    3.0    3.08 1.48   1   5     4 -0.03    -1.34 0.08
item08    8 300 2.63 1.48    2.0    2.53 1.48   1   5     4  0.29    -1.39 0.09
item09    9 300 2.70 1.41    3.0    2.63 1.48   1   5     4  0.23    -1.27 0.08
item10   10 300 3.16 1.40    3.0    3.20 1.48   1   5     4 -0.14    -1.27 0.08
item11   11 300 3.29 1.43    3.5    3.36 2.22   1   5     4 -0.29    -1.26 0.08
item12   12 300 3.06 1.42    3.0    3.07 1.48   1   5     4 -0.09    -1.34 0.08
item13   13 300 3.03 1.35    3.0    3.03 1.48   1   5     4  0.04    -1.20 0.08
item14   14 300 2.84 1.43    3.0    2.80 1.48   1   5     4  0.17    -1.31 0.08
item15   15 300 2.96 1.39    3.0    2.95 1.48   1   5     4  0.04    -1.27 0.08
item16   16 300 3.20 1.49    3.0    3.25 1.48   1   5     4 -0.20    -1.40 0.09
item17   17 300 3.43 1.42    4.0    3.53 1.48   1   5     4 -0.36    -1.24 0.08
item18   18 300 3.09 1.43    3.0    3.11 1.48   1   5     4 -0.14    -1.33 0.08
  • Here, NOT using leave-one-out
#MD2 <- careless::mahad(cnrexample1, plot=FALSE) # using careless package

# using detranli package
MD <- mahal(cnrexample1, mu = colMeans(cnrexample1), sig = cov(cnrexample1))
hist(MD)

  • Here, NOT using leave-one-out
PTC <- ptcor(cnrexample1, mu = colMeans(cnrexample1))
hist(PTC)

  • Here, NOT using leave-one-out
PTCOSSIM <- ptcossim(cnrexample1, mu = colMeans(cnrexample1))
hist(PTCOSSIM)

library(GGally)
ggpairs(data.frame(MD=MD, PTC=PTC, PTCOSSIM = PTCOSSIM))

\(p\)-values, create flags

set.seed(1234) # make results replicable
pvals <- cnrdetect(cnrexample1,
                   pointscales=rep(5,18),
                   feat_funs = c("mahal","ptcor"),
                   feat_idvals = c(0, 1))

# p < .05: reject H0 that the row is a bot, so predict human
flags <- ifelse(pvals < .05, "human", "bot")
pvals flags
0.005 human
0.002 human
0.008 human
0.012 human
0.002 human
0.127 bot
0.571 bot
... (output truncated; 300 rows)
  • cnrdetect()
    • Function that implements algorithm
  • Data (cnrexample1)
  • # categories/item (pointscales=rep(5,18))
    • 18 items, 5 categories each
  • NRIs (feat_funs)
    • "mahal" (MD), "ptcor" (PTC), "ptcossim" (PTCOSSIM)
    • User-defined NRI functions are experimental
  • Ideal points (feat_idvals)
    • Same order as feat_funs
  • Create flags (i.e., classify); a quick summary is sketched below
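
A quick way to summarize the resulting classification:

# Counts and proportion flagged
table(flags)
mean(flags == "bot")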

Take-home Activity

  • OSF repository: https://osf.io/vnuew/
  • Self-paced lab activity
    • LabPart2.html
    • Answer key: LabPart2Key.html

Algorithm Performance

Simulations: Original

  1. Ilagan & Falk (in press)
    • 👶 from openpsychometrics.org
      • 16 or 32 items from Humour Styles Questionnaire
    • Simulated 🤖

95% sensitivity maintained across all conditions

Simulations: Extended

Falk, Huang, & Ilagan (under review)

  1. Simulated 👶, simulated 🤖
    • “👶” from an item response model
    • 450 different conditions
      • item and test properties, sample size, contamination rate, bot type
  2. Real 👶, simulated 🤖
    • 36 datasets on openpsychometrics.org
      • 2019 and earlier, participation out of personal interest
    • Subsets of data, varied bot type and contamination rates

95% sensitivity maintained across all conditions

Classification accuracy?

\(>90\%\) accuracy when…

  1. Higher variability in item means
    • Good: 4.41, 3.76, 2.98, 2.31, 1.83 … (SD of item means around 1)
      • Caution: these are means for 👶; they are distorted under high 🤖 contamination
    • Bad: 3.35, 3.16, 2.99, 2.73, 3.34 …
      • Items nearly exchangeable even for humans
  • Implications for test construction
    • Means (or intercepts) are important
    • Focusing only on factor loadings is misguided
    • Item Response Theory (IRT) concepts are valuable (see sketch below)
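
A one-line check of item-mean variability for your own instrument, assuming dat holds the Likert responses:

# Near 1 is good; near 0 means items are nearly exchangeable even for humans
sd(colMeans(dat))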

Classification accuracy?

\(>90\%\) accuracy when…

  1. More items
    • 25-50+ is good
    • \(\approx 25\) possible with good variability in item means
      • e.g., Right-Wing Authoritarianism
    • 50+ a safer choice
  2. More categories per item
    • 5+
  3. Other things that help
    • Larger sample size (300+)
    • Items of appropriate difficulty for sample (IRT concepts)
    • Decent factor loadings

Example accuracy sims with real 👶

Algorithm re-cap

  • Does not require measurement model or strong assumptions for 👶
  • Does not require 🤖 or 👶 exemplars
  • Provided items are exchangeable for 🤖
    • Maintains desired sensitivity for detecting 🤖
    • Robust to how 🤖 are generated

Loose Ends/FAQ

What could you “pre-register”?

  • Overall plan for how to deal with 🤖
    • Advice from Part 1 holds
  • This algorithm (🛠️) as part of the plan (🧰)
    • Which items, NRIs (e.g., MD and PTC), # permutations (e.g., 1000), sensitivity (\(\alpha=.05\) for 95%)
  • How to deal with 🚩
    • Further inspection or exclusion rules
    • Reporting of results w/ and w/o flagged (be transparent)

Bots in the wild 🌴

  • Are uniform and binomial 🤖 realistic?
    • Form-filler 🤖 look uniform
  • Natural language processing
    • 🤖 may look more like 👶
    • Bad news for the enterprise of online surveys 😭😭😭

More real-world testing needed; still useful for data re-analysis, random responders

FAQ

  • Multiple scales?
    • LabPart2.html, pointscales=c(rep(4,42),rep(7,10))
  • Which NRIs? How many?
    • Simulation studies planned
    • \(>2\) possible; PTC vs. PTCOSSIM somewhat redundant
  • Reverse-coded items?
    • Simulation studies planned; MD is immune, other NRIs may be affected
    • Leave reverse-worded items as they are (no recoding)
  • Number of permutations
    • More = more stability, but similar overall performance
    • Experimental/development version of detranli can do parallel processing

Heterogeneity/Diverse Populations

  • Could increase false positives
  • Simulations with real 👶 were fairly heterogeneous
  • Should be more robust than model-based approaches
  • Consider \(\alpha=.1\) (i.e., \(90\%\) sensitivity)
  • Be cautious about excluding those with 🚩
  • Sensitivity analysis: analyses with and without 🚩 (see sketch below)
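
A sensitivity-analysis sketch, reusing flags from the earlier demo:

# Compare substantive results with and without flagged rows
dat_kept <- cnrexample1[flags == "human", ] # drop rows flagged as bots
# ... fit the same model to cnrexample1 and dat_kept, then compare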

Missing data

  • Can 🤖 induce missing data?
    • Could be easily programmed
    • “Don’t know”, “prefer not to answer”
  • Experimental/development version of detranli
    • FIML to obtain means and covariances
    • Use only available items

Accuracy

  • Can we obtain \(p\)-values, but try to optimize classification accuracy?
  • Mixture of \(p\)-values (see sketch below)
    • 🤖 = uniform
    • 👶 = pile up around low values
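
To eyeball this mixture in the demo data:

# Bots contribute a roughly uniform layer of p-values;
# humans pile up near zero
hist(pvals, breaks = 20)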

Workshop Recap

  1. Be familiar with the bot problem, and general strategies to deter/detect.
  2. Interpret traditional outlier statistics (nonresponsivity indices).
  3. Understand why some strategies for using such statistics are not ideal.
  4. Understand machine learning/binary classification concepts.
  5. Understand basic theory of bot detection algorithm.
  6. Using R, apply and interpret algorithm results.

Thank you!

Ilagan, M. J., & Falk, C. F. (in press). Model-agnostic unsupervised detection of bots in a Likert-type questionnaire. Behavior Research Methods. https://doi.org/10.3758/s13428-023-02246-7

Falk, C. F., Huang, A., & Ilagan, M. J. Unsupervised bot detection: In search of high classification accuracy. Manuscript under review. Preprint: https://osf.io/preprints/psyarxiv/4nmxh

  • Funding
    • Thank you to SSHRC and FRQ-NT

Extra slides

Details and visualization

res <- cnrdetect(cnrexample1, pointscales=rep(5,18),
                 details=TRUE)

flags <- ifelse(res$pvals < .05, "human", "bot")

# NRIs for one observation vs. its synthetic bots
idx <- 10 # observation 10
obs_nri <- res$obs_nri[idx,] # NRIs
synth_nri <- res$synth_nris[[idx]] # Synth bot NRIs

library(ggplot2) # for ggplot(); also attached earlier via GGally
ggplot(synth_nri, aes(x=mahal, y=ptcossim)) +
  geom_point(size=2, alpha=.5) +
  geom_point(aes(x=obs_nri[1], y=obs_nri[2]), color="blue", size=4)

  • details=TRUE saves more details
    • Leave-one-out NRIs
      • Original rows (res$obs_nri)
      • Synthetic bots (res$synth_nris)
    • Synthetic bot Likert-type responses (res$synth_likert)