2.
J Exp Psychol Gen; 152(7): 2008-2025, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37104799

ABSTRACT

Are people more or less likely to follow numerical advice that communicates uncertainty in the form of a confidence interval? Prior research offers competing predictions. Although some research suggests that people are more likely to follow the advice of more confident advisors, other research suggests that people may be more likely to trust advisors who communicate uncertainty. Participants (N = 17,615) in 12 incentivized studies predicted the outcomes of upcoming sporting events, the preferences of other survey responders, or the number of deaths due to COVID-19 by a future date. We then provided participants with an advisor's best guess and manipulated whether or not that best guess was accompanied by a confidence interval. In all but one study, we found that participants were either directionally or significantly more likely to choose the advisor's forecast (over their own) when the advice was accompanied by a confidence interval. These results were consistent across different measures of advice following and did not depend on the width of the confidence interval (75% or 95%), on advice quality, or on whether people had information about the advisor's past performance. These results suggest that advisors may be more persuasive if they provide reasonably sized confidence intervals around their numerical estimates.


Subject(s)
COVID-19, Humans, Confidence Intervals, Uncertainty, Trust, Persuasive Communication
3.
J Exp Psychol Gen; 152(2): 571-589, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36095168

ABSTRACT

Can overconfidence be reduced by asking people to provide a belief distribution over all possible outcomes, that is, by asking them to indicate how likely all possible outcomes are? Although prior research suggests that the answer is "yes," that research suffers from methodological confounds that muddle its interpretation. In our research, we remove these confounds to investigate whether providing a belief distribution truly reduces overconfidence. In 10 studies, participants made predictions about upcoming sports games or other participants' preferences, and then indicated their confidence in these predictions using rating scales, likelihood judgments, and/or incentivized wagers. Contrary to prior research, and to our own expectations, we find that providing a belief distribution usually increases overconfidence, because doing so seems to reinforce people's prior beliefs.


Subject(s)
Gambling, Judgment, Humans
4.
Psychol Sci; 32(2): 159-172, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33400628

ABSTRACT

Previous research suggests that choice causes an illusion of control: it makes people feel more likely to achieve preferable outcomes, even when they are selecting among options that are functionally identical (e.g., lottery tickets with an identical chance of winning). This research has been widely accepted as evidence that choice can have significant welfare effects, even when it confers no actual control. In this article, we report the results of 17 experiments that examined whether choice truly causes an illusion of control (N = 10,825 online and laboratory participants). We found that choice rarely makes people feel more likely to achieve preferable outcomes (unless it makes the preferable outcomes actually more likely), and when it does, it is not because choice causes an illusion but because choice reflects some participants' preexisting (illusory) beliefs that the functionally identical options are not identical. Overall, choice does not seem to cause an illusion of control.


Subject(s)
Illusions, Emotions, Humans, Probability
5.
Nat Hum Behav; 4(11): 1215, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33037398

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

6.
Nat Hum Behav; 4(11): 1208-1214, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32719546

ABSTRACT

Empirical results hinge on analytical decisions that are defensible, arbitrary, and motivated. These decisions probably introduce bias (towards the narrative put forward by the authors), and they certainly involve variability not reflected by standard errors. To address this source of noise and bias, we introduce specification curve analysis, which consists of three steps: (1) identifying the set of theoretically justified, statistically valid, and non-redundant specifications; (2) displaying the results graphically, allowing readers to identify consequential specification decisions; and (3) conducting joint inference across all specifications. We illustrate the use of this technique by applying it to three findings from two different papers, one investigating discrimination based on distinctively Black names, the other investigating the effect of assigning female versus male names to hurricanes. Specification curve analysis reveals that one finding is robust, one is weak, and one is not robust at all.
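The three steps translate naturally into code. Below is a minimal Python sketch on a fabricated dataset; the variable names, effect sizes, set of specifications, and the permutation-based joint test are illustrative assumptions, not the paper's own analysis.

```python
# Minimal specification curve analysis sketch on simulated data (illustrative only).
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "x": rng.normal(size=n),    # predictor of interest
    "c1": rng.normal(size=n),   # candidate covariates an analyst might defensibly include
    "c2": rng.normal(size=n),
})
df["y"] = 0.2 * df["x"] + 0.5 * df["c1"] + rng.normal(size=n)

controls = ["c1", "c2"]

def run_curve(data):
    """Steps 1-2: estimate x's effect under every defensible covariate set."""
    out = []
    for k in range(len(controls) + 1):
        for combo in itertools.combinations(controls, k):
            X = sm.add_constant(data[["x", *combo]])
            fit = sm.OLS(data["y"], X).fit()
            out.append((combo, fit.params["x"], fit.pvalues["x"]))
    return out

observed = run_curve(df)
observed_median = np.median([b for _, b, _ in observed])

# Step 3: joint inference by permutation -- sever the x-y link and ask how often
# the whole curve of estimates looks as extreme as the observed one.
perm_medians = []
for _ in range(500):
    shuffled = df.assign(x=rng.permutation(df["x"].to_numpy()))
    perm_medians.append(np.median([b for _, b, _ in run_curve(shuffled)]))

p_joint = np.mean(np.abs(perm_medians) >= abs(observed_median))
for combo, b, p in sorted(observed, key=lambda t: t[1]):   # the specification curve
    print(f"controls={combo!s:<14} b={b:+.3f} p={p:.3f}")
print(f"joint permutation p (median estimate): {p_joint:.3f}")
```

Sorting the per-specification estimates gives the descriptive curve; shuffling the predictor breaks its link to the outcome, so the permuted curves show what the full set of specifications would look like under the null.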


Subject(s)
Statistical Data Interpretation, Theoretical Models, Research/standards, Adult, Data Visualization, Humans, Statistical Models, Research Design/standards
7.
J Exp Psychol Gen; 149(5): 870-888, 2020 May.
Article in English | MEDLINE | ID: mdl-31886705

ABSTRACT

How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we offer a theory called prospective outcome bias. According to this theory, people tend to make decisions that they expect to feel good about after the outcome has been realized. Because people expect to feel best about decisions that are followed by successes (even when the decisions did not cause those successes), they will pay more to increase their chances of success when success is already likely (e.g., people will pay more to increase their probability of success from 80% to 90% than from 10% to 20%). We find evidence for prospective outcome bias in nine experiments. In Study 1, we establish that people evaluate costly decisions that precede successes more favorably than costly decisions that precede failures, even when the decisions did not cause the outcome. Study 2 establishes, in an incentive-compatible laboratory setting, that people are more motivated to increase higher chances of success. Studies 3-5 generalize the effect to other contexts and decisions, and Studies 6-8 indicate that prospective outcome bias causes it (rather than regret aversion, waste aversion, goals-as-reference-points, probability weighting, or loss aversion). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: people prefer small increases in the probability of large rewards (e.g., a 1% improvement in their chances of winning $100) to large increases in the probability of small rewards (e.g., a 10% improvement in their chances of winning $10).
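One detail worth making explicit: in the Study 9 example the two improvements carry the same expected value, so a systematic preference for the first cannot be explained by expected value alone. A quick check of the arithmetic (illustrative, not code from the paper):

```python
# Expected-value gain of each improvement in the Study 9 example.
delta_ev_large_reward = 0.01 * 100   # +1 percentage point on a $100 prize
delta_ev_small_reward = 0.10 * 10    # +10 percentage points on a $10 prize
print(delta_ev_large_reward, delta_ev_small_reward)  # 1.0 1.0 -- identical
```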


Subject(s)
Decision Making, Emotions, Reward, Adult, Female, Humans, Male, Probability, Young Adult
8.
J Exp Psychol Gen; 148(9): 1628-1639, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31464485

ABSTRACT

Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (a) all conducted studies were included (i.e., an empty file drawer), and (b) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of 1 study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (a) an internal meta-analysis would need to contain exclusively studies that were properly preregistered, (b) those preregistrations would have to be followed in all essential aspects, and (c) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.
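To see why the rates compound, consider a toy simulation. This is an illustrative model of directional selective reporting, not the authors' simulation, and the exact rates it produces depend on how the p-hacking is modeled.

```python
# Toy model: each study has no true effect, two correlated outcome measures,
# and the researcher reports whichever measure looks better in the predicted
# direction. A fixed-effect internal meta-analysis then pools the reported results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_cell, k_studies, n_sims, rho = 50, 10, 2000, 0.5
cov = [[1.0, rho], [rho, 1.0]]

def one_study():
    """Null effect; report the better-looking of two correlated measures."""
    treat = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_cell)
    ctrl = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_cell)
    diffs = treat.mean(axis=0) - ctrl.mean(axis=0)
    ses = np.sqrt(treat.var(axis=0, ddof=1) / n_per_cell +
                  ctrl.var(axis=0, ddof=1) / n_per_cell)
    pick = np.argmax(diffs / ses)          # minimal, directional p-hack
    return diffs[pick], ses[pick]

single_fp = meta_fp = 0
for _ in range(n_sims):
    ests, ses = map(np.array, zip(*(one_study() for _ in range(k_studies))))
    # false-positive check for one study on its own (two-tailed z test)
    single_fp += 2 * stats.norm.sf(abs(ests[0] / ses[0])) < 0.05
    # fixed-effect (inverse-variance) internal meta-analysis of all k studies
    w = 1.0 / ses**2
    z_meta = (w @ ests) / np.sqrt(w.sum())
    meta_fp += 2 * stats.norm.sf(abs(z_meta)) < 0.05

print(f"single-study false-positive rate: {single_fp / n_sims:.1%}")
print(f"{k_studies}-study meta-analysis false-positive rate: {meta_fp / n_sims:.1%}")
```

Because the hack nudges every study's estimate in the predicted direction, the bias does not wash out as studies are added: the meta-analytic standard error shrinks around a biased mean, so the pooled false-positive rate climbs far above the per-study rate.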


Subject(s)
Meta-Analysis as Topic, Humans, Publication Bias, Reproducibility of Results
9.
PLoS One; 14(3): e0213454, 2019.
Article in English | MEDLINE | ID: mdl-30856227

ABSTRACT

p-curve, the distribution of significant p-values, can be analyzed to assess whether a set of findings has evidential value, that is, whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed that p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, hence replicable, association (gun ownership and number of sexual partners) leads to a right-skewed p-curve, while a false-positive one (respondent ID number and trust in the Supreme Court) leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.
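The contrast is easy to reproduce in simulation. A rough sketch, with simulated variables standing in for the paper's survey measures:

```python
# Simulated p-curves: a confounded-but-real association is right-skewed,
# a truly null association is roughly flat among its significant p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_studies, n_obs = 5000, 100
bin_edges = np.arange(0.0, 0.051, 0.01)   # .00-.01, ..., .04-.05

def pcurve(simulate_xy):
    """Share of the significant (p < .05) p-values falling in each bin."""
    sig_ps = []
    for _ in range(n_studies):
        x, y = simulate_xy()
        _, p = stats.pearsonr(x, y)
        if p < 0.05:
            sig_ps.append(p)
    counts, _ = np.histogram(sig_ps, bins=bin_edges)
    return counts / counts.sum()

def confounded():
    z = rng.normal(size=n_obs)             # a confounder drives both variables
    return z * 0.5 + rng.normal(size=n_obs), z * 0.5 + rng.normal(size=n_obs)

def false_positive():
    return rng.normal(size=n_obs), rng.normal(size=n_obs)   # no relationship

print("bins:           .00-.01 .01-.02 .02-.03 .03-.04 .04-.05")
print("confounded:    ", np.round(pcurve(confounded), 2))     # right-skewed
print("false positive:", np.round(pcurve(false_positive), 2)) # roughly flat
```

In this simulation the confounded pair should pile its significant p-values below .01, while the null pair should spread them roughly evenly across the .00-.05 range.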

10.
Psychol Sci; 30(2): 159-173, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30561244

ABSTRACT

When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people's adjustments from anchors. In Studies 1-6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when they were allowed to adjust by a maximum of 6 units than by a maximum of 15 units. In Study 7, participants adjusted less after considering whether an outcome would be within a smaller distance of the anchor. These results suggest that anchoring effects may reflect a desire to avoid adjustments that feel too extreme.


Subject(s)
Judgment/physiology, Adult, Female, Humans, Male
11.
Perspect Psychol Sci; 13(2): 255-259, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29592640

ABSTRACT

We describe why we wrote "False-Positive Psychology," analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.

12.
Psychol Sci; 29(4): 504-520, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29466077

ABSTRACT

Research suggests that people prefer confident to uncertain advisors. But do people dislike uncertain advice itself? In 11 studies (N = 4,806), participants forecasted an uncertain event after receiving advice and then rated the quality of the advice (Studies 1-7, S1, and S2) or chose between two advisors (Studies 8-9). Replicating previous research, our results showed that confident advisors were judged more favorably than advisors who were "not sure." Importantly, however, participants were not more likely to prefer certain advice: They did not dislike advisors who expressed uncertainty by providing ranges of outcomes, giving numerical probabilities, or saying that one event is "more likely" than another. Additionally, when faced with an explicit choice, participants were more likely to choose an advisor who provided uncertain advice over an advisor who provided certain advice. Our findings suggest that people do not inherently dislike uncertain advice. Advisors benefit from expressing themselves with confidence, but not from communicating false certainty.


Subject(s)
Decision Making, Self Concept, Uncertainty, Adult, Female, Humans, Male, Personality
13.
J Pers Soc Psychol; 113(5): 659-670, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28737416

ABSTRACT

Across 4,151 participants, the authors demonstrate a novel framing effect, attribute matching, whereby matching a salient attribute of a decision frame with that of a decision's options facilitates decision-making. Attribute matching is shown to increase decision confidence and, ultimately, consensus estimates by increasing feelings of metacognitive ease. In Study 1, participants choosing the more attractive of two faces or rejecting the less attractive face reported greater confidence in and perceived consensus around their decision. Using positive and negative words, Study 2 showed that the attribute's extremity moderates the size of the effect. Study 3 found that decision ease mediates these changes in confidence and consensus estimates. Consistent with a misattribution account, when participants were warned about this external source of ease in Study 4, the effect disappeared. Study 5 extended attribute matching beyond valence to objective judgments. The authors conclude by discussing related psychological constructs as well as downstream consequences.


Subject(s)
Choice Behavior/physiology, Facial Recognition/physiology, Judgment/physiology, Metacognition/physiology, Adult, Female, Humans, Male, Middle Aged
14.
15.
J Exp Psychol Gen; 145(10): 1298-1311, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27505154

ABSTRACT

[Correction Notice: An Erratum for this article was reported in Vol 145(10) of Journal of Experimental Psychology: General (see record 2016-42695-001). In the article, the symbols in Figure 2 were inadvertently altered in production. All versions of this article have been corrected.]

In this article, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions.


Subject(s)
Decision Making/physiology, Forecasting, Adult, Choice Behavior/physiology, Female, Humans, Male
16.
J Exp Psychol Gen; 144(6): 1146-1152, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26595842

ABSTRACT

When studies examine true effects, they generate right-skewed p-curves, distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis, and provide practical solutions that substantially increase its robustness.


Subject(s)
Statistical Data Interpretation, Publication Bias/statistics & numerical data, Research Design/statistics & numerical data, Scientific Misconduct/statistics & numerical data, Humans
17.
J Exp Psychol Gen; 144(1): 114-126, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25401381

ABSTRACT

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.


Subject(s)
Algorithms, Avoidance Learning, Culture, Forecasting, Comprehension, Decision Making, Female, Humans, Male, Motivation, Young Adult
18.
J Exp Psychol Gen; 143(2): 534-547, 2014 Apr.
Article in English | MEDLINE | ID: mdl-23855496

ABSTRACT

Because scientists tend to report only studies (publication bias) or analyses (p-hacking) that "work," readers must ask, "Are these effects true, or do they merely reflect selective reporting?" We introduce p-curve as a way to answer this question. P-curve is the distribution of statistically significant p values for a set of studies (ps < .05). Because only true effects are expected to generate right-skewed p-curves, containing more low (.01s) than high (.04s) significant p values, only right-skewed p-curves are diagnostic of evidential value. By telling us whether we can rule out selective reporting as the sole explanation for a set of findings, p-curve offers a solution to the age-old inferential problems caused by file-drawers of failed studies and analyses.


Subject(s)
Publication Bias, Statistical Data Interpretation, Humans, Statistical Models, Experimental Psychology, Reproducibility of Results, Statistics as Topic
19.
Perspect Psychol Sci; 9(6): 666-681, 2014 Nov.
Article in English | MEDLINE | ID: mdl-26186117

ABSTRACT

Journals tend to publish only statistically significant evidence, creating a scientific record that markedly overstates the size of effects. We provide a new tool that corrects for this bias without requiring access to nonsignificant results. It capitalizes on the fact that the distribution of significant p values, p-curve, is a function of the true underlying effect. Researchers armed only with sample sizes and test results of the published findings can correct for publication bias. We validate the technique with simulations and by reanalyzing data from the Many-Labs Replication project. We demonstrate that p-curve can arrive at conclusions opposite those of existing tools by reanalyzing the meta-analysis of the "choice overload" literature.
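The estimation logic can be sketched compactly: for a candidate true effect size, each significant result implies a conditional probability of being at least as extreme as observed (a "pp-value"); the estimate is the effect size under which those pp-values are closest to uniform. A minimal Python sketch of that logic, assuming two-sample t-tests with equal cell sizes and made-up test results:

```python
# Rough p-curve-based effect-size estimation from significant t-test results.
import numpy as np
from scipy import stats

# Made-up significant results: (observed t statistic, n per cell).
observed = [(2.30, 20), (2.80, 25), (2.10, 30), (3.40, 20), (2.60, 40)]

def ks_distance(d):
    """Distance from uniformity of the 'pp-values' implied by candidate effect d."""
    pps = []
    for t_obs, n in observed:
        dof = 2 * n - 2
        ncp = d * np.sqrt(n / 2)               # noncentrality if the true effect is d
        t_crit = stats.t.ppf(0.975, dof)       # two-tailed .05 cutoff
        power = stats.nct.sf(t_crit, dof, ncp)
        # probability of a result at least this extreme, given that it is significant
        pps.append(stats.nct.sf(t_obs, dof, ncp) / power)
    return stats.kstest(pps, "uniform").statistic

grid = np.arange(0.0, 1.51, 0.01)
d_hat = grid[np.argmin([ks_distance(d) for d in grid])]
print(f"p-curve estimate of the true effect size: d = {d_hat:.2f}")
```

Because only significant results enter the calculation, the procedure needs no access to the file drawer; the grid search over candidate effect sizes is an illustrative simplification.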


Subject(s)
Publication Bias, Statistics as Topic, Computer Simulation
20.
J Pers Soc Psychol; 103(6): 933-948, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22924750

ABSTRACT

Across 7 experiments (N = 3,289), we replicated the procedure of Experiments 8 and 9 from Bem (2011), which had originally demonstrated retroactive facilitation of recall, and we failed to replicate that finding. We further conducted a meta-analysis of all replication attempts of these experiments and found that the average effect size (d = 0.04) is no different from 0. We discuss some reasons for differences between the results in this article and those presented in Bem (2011).


Subject(s)
Psychological Anticipation/physiology, Cognition/physiology, Mental Recall/physiology, Neuropsychological Tests/standards, Perception/physiology, Adult, Female, Humans, Male, Parapsychology/standards, Psycholinguistics/methods, Time Factors, Young Adult