- 1 Goal-Oriented Agents Lab (GOAL), Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (ISTC-CNR), Rome, Italy
- 2 Department of Environmental Biology, University of Rome “La Sapienza”, Rome, Italy
- 3 Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Rome, Italy
- 4 Bambino Gesù Children’s Hospital IRCCS, Rome, Italy
The search for the neuronal and psychological underpinnings of pathological gambling in humans would benefit from investigating related phenomena in other species as well. In this paper, we present a survey of studies in three widely different populations of agents, namely rodents, non-human primates, and robots. Each of these populations offers valuable and complementary insights on the topic, as the literature demonstrates. In addition, we highlight the deep and complex connections between relevant results across these different areas of research (i.e., cognitive and computational neuroscience, neuroethology, cognitive primatology, neuropsychiatry, evolutionary robotics), to make the case for a greater degree of methodological integration in future studies on pathological gambling.
Introduction
Gambling can be defined as betting money, or other equivalent goods, on the future outcome of an event that presents a high degree of uncertainty, with a view to winning a prize. Winning depends mainly (or exclusively) on chance and little (or not at all) on individual ability. While betting may represent a recreational activity for the majority of people, it may become a serious behavioral disorder for others (Petry et al., 2005). The rapid worldwide growth of legalized gaming opportunities (Wilber and Potenza, 2006; McCormack et al., 2012; Donati et al., 2013), including the increasing availability of online gambling through the Internet, has raised concerns over the impact of excessive gambling and its detrimental consequences on public health (Shaffer and Korn, 2002; Carragher and McWilliams, 2011). Thus, due to the increasing number of affected people, pathological gambling represents a growing concern for society.
In fact, this behavior is clinically characterized as a pathology: in DSM-IV-TR (American Psychiatric Association, 2000), it was described as a persistent, recurrent and maladaptive behavior, which disrupts personal, family, professional or vocational pursuits (Potenza, 2001). The personal and social consequences of this disorder often include job loss, family problems and divorce, financial and legal problems, and criminal behavior (Lowengrub et al., 2006). Pathological gambling affects 0.2–5.3% of adults in western societies (Bastiani et al., 2013) and is highly comorbid with a range of other psychiatric disorders, such as attention-deficit/hyperactivity disorder (ADHD), other impulse-control disorders, and obsessive-compulsive disorders (Hollander et al., 2005), as well as with substance abuse (Petry et al., 2005; Hodgins et al., 2011). Some pathological features of gambling are similar to those of drug addiction, such as the need to gamble increasing amounts of money (escalation) in order to achieve the desired excitement or “rush” (tolerance), the irritability that accompanies abstention from the activity (withdrawal), and the failure of attempts to control or stop the behavior (loss of control). Notably, whilst pathological gambling was until recently classified (in DSM-III and DSM-IV) among the “Impulse-Control Disorders Not Elsewhere Classified”, it has been reclassified as a “non-substance addiction” in DSM-5 (American Psychiatric Association, 2013), that is, a “behavioral addiction”. Pathological gambling is also associated with increased suicidal ideation and attempts compared to the general population: approximately one out of five pathological gamblers attempts suicide (Volberg, 2002). Such rates are higher than for any other addictive disorder. Thus, gambling represents a public concern, being both a social and a psychiatric issue.
Far from being exclusively an adult concern, gambling is becoming a serious behavioral problem also among adolescents (Cunningham-Williams and Cottler, 2001; Dickson et al., 2002), whose involvement has increased substantially over the past 20 years (Huang and Boyer, 2007). Epidemiological studies show that the prevalence of pathological gambling is 2–4 times higher among adolescents than among adults, with 3.5–8.0% of adolescents meeting the criteria for this pathology (Felsher et al., 2004; Ellenbogen et al., 2007; Hodgins et al., 2011; Caillon et al., 2012). Adolescence and young adulthood may be periods of especially heightened vulnerability to the development of gambling disorders, which are therefore receiving increasing attention from clinicians and preclinical researchers (Jazaeri and Habil, 2012; Zoratto et al., 2013).
The etiology of pathological gambling is multi-factorial; both genetic (e.g., a polymorphism in the serotonin transporter gene; Ibanez et al., 2003) and socio-environmental (e.g., Donati et al., 2013; Potenza, 2013) risk-factors have been identified. Moreover, cognitive models of gambling argue that irrational beliefs and erroneous perceptions may play a key role (Reid, 1986; Clark, 2010). Indeed, some authors argue that expectancies of winning, illusions of control, and subsequent entrapment do contribute to the development and the maintenance of gambling patterns (Joukhador et al., 2003). Psycho-genetic studies have revealed that, among genes involved in altered serotonergic and dopaminergic neurotransmission, the most significant for pathological gambling are serotonin transporter (SERT; Ibanez et al., 2003; Reuter et al., 2005) and dopamine transporter (DAT; Comings et al., 2001).
Methods for treating pathological gambling include various counselling-based approaches and pharmacological therapy, although there are no drugs which have been officially approved for the specific treatment of pathological gambling by the U.S. Food and Drug Administration (FDA). Therefore, in pathological gamblers, drugs are mainly prescribed for the treatment of the comorbid conditions and not for the pathology itself (Hollander et al., 2005). Pathological gamblers respond well to treatment with selective serotonin reuptake inhibitors (SSRIs, particularly paroxetine; Kim et al., 2002), mood stabilizers, and opioid antagonists (such as nalmefene), commonly used in the treatment of alcoholism (see for a review Lowengrub et al., 2006).
In view of the growing incidence of pathological gambling, its severe mental and social consequences, and the still preliminary nature of its treatment, it is urgent to mobilize various approaches and methods to further deepen our understanding of the neuronal and psychological underpinnings of this condition. Indeed, the present Research Topic constitutes an important and timely initiative towards that end. The contribution we offer in this review concerns how evidence obtained in nonhuman subjects is crucial to investigate pathological gambling in humans. In particular, we make the case for studying three widely different populations of agents: rodents (Section Rodents as an Animal Model of Gambling Behavior), nonhuman primates (Section Risky Choices in Nonhuman Primates: Implications for Human Pathological Gambling), and robots (Section Risk Attitudes, Environmental Uncertainty and Addictive Behavior: Perspectives From Computational Neuroscience and Evolutionary Robotics). While each of these populations offers valuable insights on the topic, their true worth is revealed only by looking at how they relate to each other. Hence we will review the literature across all these areas of research (i.e., cognitive and computational neuroscience, neuroethology, cognitive primatology, neuropsychiatry, evolutionary robotics), with the aim of suggesting the need for greater methodological integration in future studies on laboratory modeling of pathological gambling.
Rodents as an Animal Model of Gambling Behavior
In the field of behavioral neuroscience, animal models enable the investigation of brain-behavior relations under controlled conditions (e.g., standardized housing and testing), with the aim of gaining insight into normal and abnormal human behavior and its underlying neural, psychobiological and neuro-endocrinological processes (van der Staay, 2006). They are particularly suitable for the dissection of the precise mechanisms involved in decision-making processes, for the analysis of inter-individual differences under tight control of environmental and genetic conditions, and for follow-up studies (de Visser et al., 2011). As we shall see in what follows, these considerations also apply to the study of gambling behavior, and especially to the use of rodents (mostly rats) as an animal model of risk proneness (e.g., Adriani et al., 2009, 2010).
Assessment of Gambling Proneness: Clinical and Preclinical Approaches
In humans, Probability Discounting can be studied by means of either questionnaires or operant paradigms. The “South Oaks Gambling Screen” (for adults, Lesieur and Blume, 1987; for adolescents, Wiebe et al., 2000), the “Gambling Attitudes and Beliefs Survey” (Strong et al., 2004) and the “Canadian Problem Gambling Index” (Young and Wohl, 2011) are some examples of personality tests and reports widely used in clinical psychology and experimental research. In these protocols, gamblers are characterized with scores that represent their averaged behavior over periods of weeks, months or years, whilst the time spans that most naturally correspond to the expression of gambling behavior are those of seconds, minutes or hours. The main limitation of these traditional methods is therefore the lack of an appropriate temporal dimension (van den Bos et al., 2013). By contrast, controlled experimental or clinical paradigms such as the “Iowa Gambling Task” (IGT; Bechara et al., 1994), the “Balloon Analogue Risk Task” (Lejuez et al., 2002) and the “Probability Discounting Task” (e.g., Scheres et al., 2006; Shead and Hodgins, 2009) overcome this temporal limitation. However, as extensively discussed in van den Bos et al. (2013), they are characterized by a second limitation, i.e., the lack of an appropriate context, due to the artificial conditions of the laboratory environment. It should also be noted that these paradigms can be performed either with real rewards over limited time intervals (e.g., minutes, hours) or with questions about hypothetical ones (e.g., huge amounts of money) over months or years.
Due to the complexity of human studies, preclinical investigations in laboratory animal models are necessary for a deeper understanding of pathological gambling. Specifically, it is relevant to exploit preclinical models of (i) the symptoms; (ii) their neurobiological determinants; and (iii) their possible modulation by pharmacological manipulation. Such studies are crucial as they allow the dissection of processes and factors associated with normal and pathological gambling in a controlled way (de Visser et al., 2011; Winstanley et al., 2011; Koot et al., 2012). Furthermore, animal models have added value from a translational perspective because they make it possible to use approaches that are virtually impossible with humans, as in the case of in vivo transgenic approaches that directly modulate the expression of target genes in relevant brain areas (Adriani et al., 2010).
Many operant paradigms have been developed to study tolerance to uncertainty and/or gambling proneness in animal models (Mobini et al., 2000; Cardinal and Howes, 2005; Adriani et al., 2006; Wilhelm and Mitchell, 2008; Winstanley et al., 2011). Specifically, by exploiting uncertainty of reward delivery, these tasks probe individual (in)tolerance to the frustration linked to missing an anticipated reward (i.e., the “loss”). The “IGT” involves the choice between a low probability of a large food reward vs. a high probability of a small food reward (van den Bos et al., 2006). The “Probabilistic-Delivery Task” (PDT; which belongs to the broader category of Probability Discounting) is based on a choice between either a certain, small amount of food reward or larger amounts delivered (or not) according to a given (and progressively decreasing) probability (Adriani and Laviola, 2006; Adriani et al., 2006). The “Risky Decision-Making Task” (RDT) implies the choice between a small, “safe” food reward and a larger food reward associated with the risk of punishment (e.g., footshock; Simon et al., 2009). The “rodent Slot Machine Task” (rSMT) makes it possible to evaluate whether the experimental subject discriminates a complete signal (e.g., three lights turned on, indicative of a win) from a nearly complete one (e.g., two lights out of three, indicative of a loss): by means of this task, it has recently been demonstrated that rats are susceptible to putative-win signals in non-winning trials (Winstanley et al., 2011; Cocker et al., 2013). Such a phenomenon might resemble the so-called “near-miss effect”, one of the cognitive distortions regarding gambling outcomes that is thought to confer vulnerability to pathological gambling (Reid, 1986; Clark, 2010; see also Section Normative (Algorithmic) Models).
Notably, the “IGT” and the “Probability Discounting Task” are widely used in experimental and clinical research on humans. Obviously, when performed on animals, these paradigms involve real, ethologically relevant rewards over limited time intervals. Symbolic rewards (such as money in humans) or time intervals longer than a few hours cannot be used. Moreover, the contrast between alternative rewards (e.g., small vs. large), to be effective, cannot be as marked as would be needed to mimic the 1000-fold prizes offered to humans. In these tasks, in which a moderate food restriction is usually applied to increase the subjects’ motivation to work for food delivery, the magnitude of the rewards must be carefully calibrated in order to (i) allow animals to eat enough food; (ii) prevent them from becoming fully satiated; and (iii) enable them to discriminate between rewards. The first aspect is especially relevant in “closed” (compared to “open”) economies, in which subjects have to obtain their entire daily meal from the operant panels and no extra food is given at the end of each experimental session (Timberlake and Peden, 1987; Zoratto et al., 2012). The second is necessary to avoid a potential recovery from the consequences of the food loss (occurring because of the probabilistic delivery). The last can be crucial for the establishment of a basal preference in developing rats (Zoratto et al., 2013). We have recently shown that a high contrast between rewards (one pellet vs. five pellets instead of two pellets vs. six pellets) and a high probability initially associated, during training, with the large reward (66% instead of 50%) are essential to shorten the overall testing period: namely, far fewer sessions are required for the development of a baseline large-reward preference (which is otherwise slow to emerge in young animals). This is of paramount importance to overcome the developmental constraint associated with the short duration of the adolescent phase (Laviola et al., 2003).
These operant-behavior tasks involve a series of discrete decisions between two reward alternatives (Adriani et al., 2012a). In terms of automation, the experimental apparatus requires two alternative operanda (e.g., levers or nose-poking holes, through which the animal can express its choice) and computer-controlled delivery of reinforcers (e.g., food or liquids) that differ in size and actual probability of delivery (uncertainty). Other important features of the task are inherent to the trial/session schedule. For instance, the total number of choice opportunities (i.e., trials) given to the subject may be fixed (i.e., the session ends after the last trial) and independent of the total time needed to complete the task. Alternatively, the total duration of the experimental session may be fixed (minutes, hours) and thus independent of the total number of trials actually completed within such a time-window (Koot et al., 2012).
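As a minimal sketch of how such a fixed-trial schedule could be automated, the following Python snippet simulates a PDT-like session with two operanda and computer-controlled probabilistic delivery. The reward sizes, probability level, and random choice rule are illustrative placeholders, not the parameters of any published protocol.

```python
import random

def run_pdt_session(n_trials=100, p_large=0.5, small_size=1, large_size=5,
                    choose_large=lambda: random.random() < 0.5):
    """Simulate a fixed-trial PDT-like session (all parameters are illustrative)."""
    pellets_earned = 0
    for _ in range(n_trials):
        if choose_large():                    # response on the large/uncertain operandum
            if random.random() < p_large:     # probabilistic delivery (uncertainty)
                pellets_earned += large_size
            # otherwise the delivery is omitted: the "loss"
        else:                                 # response on the small/certain operandum
            pellets_earned += small_size
    return pellets_earned

print(run_pdt_session())                      # total pellets earned in one session
```

The fixed-duration variant would simply replace the trial counter with a clock, ending the session after a set time regardless of how many trials have been completed.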
The protocols reviewed above probe animals for the balance between “innate, sub-cortical” drives and “evolved, cortical” processes (Adriani and Laviola, 2009). In other words, these operant tasks make it possible to evaluate a cognitive ability, i.e., the capacity to inhibit sub-cortical drives and to express a more controlled response. Self-control is known to require intact serotonergic function (Wogar et al., 1993; Harrison et al., 1997; Puumala and Sirvio, 1998; Dalley et al., 2002), especially within the prefrontal cortex (McClure et al., 2004; Ridderinkhof et al., 2004) and its cortico-striatal projections (Cardinal et al., 2004; Christakou et al., 2004).
The Probabilistic-Delivery Task (PDT)
The “PDT” (Mobini et al., 2000; Adriani and Laviola, 2006) involves a larger but probabilistic reinforcer, which is randomly withheld by the feeding device and delivered only occasionally, so that experimental subjects face a “loss”. The progressively accumulating “losses” clearly have consequences for the long-term payoff. Such a task also provides information on the ability to cope with non-regularly delivered, randomly missing reinforcement. We have recently shown that laboratory rodents are not only tolerant to this random delivery, but are also sub-optimally attracted by this probabilistic uncertainty (Adriani and Laviola, 2006, 2009). Indeed, if the very frequent food-delivery omission is masked by the same cue (e.g., a light flash) that normally accompanies occasional food delivery, this cue may come to act as a secondary reinforcer. As such, as in second-order schedules, this conditioned stimulus may sustain continued responding for the large/uncertain reward, even though this implies decreased overall foraging in the long term. Gambling proneness may thus be sustained by the cue-induced secondary reward, which renews in the subject the expectation of an eventual delivery of the binge reward (Adriani and Laviola, 2006, 2009). Translated to human subjects, this would suggest that it is the thrill, associated with whatever physical stimuli accompany both successful and unsuccessful gambling experiences, that sustains the motivation to gamble, in spite of abysmal odds and past (mostly negative) experience: watching the ball madly spinning on the roulette wheel and waiting for the crucial card to be turned, with a mix of hope for success and fear of loss, become rewarding in themselves, and it is in view of these (certain) rewards that people start enjoying gambling activities. As long as the individual keeps this desire under control, there is nothing wrong with these activities in themselves. However, in vulnerable individuals, a loss of control over these activities may eventually intervene: pathological gamblers keep on gambling as this compulsive “urge” becomes a strong habit, not unlike other kinds of addiction (van den Bos et al., 2013).
Methodological remarks on the probabilistic-delivery task (PDT)
A theoretical framework has recently been formulated to interpret the performance of laboratory rats in this kind of two-choice task (Adriani and Laviola, 2006). Specifically, a landmark in the PDT protocol is the “indifference” point: i.e., the specific level of uncertainty at which the animals can choose either option freely with no effect on the overall economic convenience. As an example, if the ratio between large and small reward size is five-fold, then the indifference point lies at “p” = 20%. Once the “indifference” point is established, the range of “p” values providing useful information is easily recognized as that lying beyond the indifference point (i.e., 20% > “p” > 0%), where economic benefit (i.e., maximization of payoff) is attained unequivocally by repeatedly choosing the small-reward option. Thus, to maximize the payoff, subjects should be flexible enough to abandon their innate large-reward preference. As optimal performance in terms of benefit takes the form of a choice-shift towards the small reward, this requires a self-control effort in order to overcome the “innate drive” that underlies the attractiveness of the large reward (Adriani et al., 2006). By contrast, a sustained preference for the large reward denotes “temptation by risk”.
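As a worked restatement of the five-fold example above, the indifference point follows directly from equating the expected payoffs of the two options (with s denoting the size of the small reward):

```latex
E[\text{small}] = 1 \cdot s, \qquad E[\text{large}] = p \cdot 5s,
\qquad E[\text{large}] = E[\text{small}] \;\Longrightarrow\; p = \tfrac{1}{5} = 20\%.
```

For 0% < p < 20%, repeatedly choosing the small reward therefore maximizes the overall payoff, which is why only this range is informative about the balance between self-control and temptation.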
In this kind of two-choice task, details of the schedule can be calibrated appropriately (Adriani and Laviola, 2009), so that one alternative option leads to an “optimal” benefit (i.e., the raw convenience in terms of quantitative foraging or any other measurable revenue), while the other alternative provides an “affective” benefit, with a more emotional outcome (i.e., a better feeling and/or avoidance of adverse mood). In brief, for a protocol to provide useful information, any “inner drive” of interest (e.g., gambling proneness) should push animals towards a choice that necessarily leads to a sub-optimal outcome. Self-control is then defined as the ability to effect an optimal response (Stephens and Anderson, 2001) by directing choices onto the opposite operandum (nose-poking hole or lever). The protocol must never load both instances (i.e., the inner drive and the optimal payoff) onto the same operandum, because it would then be impossible to discriminate whether any preference for that operandum is due to payoff-detecting processes (“economic efficiency”) or to the “inner drive” itself.
Probabilistic-delivery task (PDT) at very low probability levels
Many factors can act together to push animals towards a suboptimal preference for a large reward, even though this is delivered quite rarely. One factor is insensitivity to risk, whereby the subjects are unable (i) to represent the uncertainty of the outcome (normally, they should anticipate that the reward is not guaranteed, which acts as a source of aversion immediately before the choice) or (ii) to perceive the punishment of “losses” (represented by the random and frequent omission of reward delivery).
Another factor is habit-induced rigidity, under which the subject seems to behave according to a well-consolidated strategy. Such a form of inflexibility may be due to a failure of negative reinforcement, namely to a lack of adaptation and feedback-reaction to the aversion (for an anticipated “unsure” prize) and/or to the punishment (due to an actually “omitted” prize) just described.
A third factor is temptation to gamble, whereby the motivational impact of the reward magnitude (“bingeing”) seems to monopolize the subject’s attention over any other reward feature. It is also possible that risk of punishment under conditions of uncertainty becomes attractive as a secondary conditioned feature, because the “binge” reward (when eventually delivered) may well generate an overwhelming peak of positive reinforcement. The latter could extend a secondary rewarding property to all cues and surrounding stimuli that predict uncertain features. Whichever of these factors prevails in the PDT and in similar tasks, the sub-optimal preference for the large, rarefied reward is taken as an index of “gambling proneness” (namely, the innate attraction for a “rare but binge” event).
“Risk of Losing” vs. “Failing to Win”
A crucial component of human gambling is the “risk of losing”, that is, “the resources staked on a favorable outcome are lost when a wager is unsuccessful” (Zeeb et al., 2009). This is distinct from “failing to win”, that is, the absence of any additional gain, causing a “frustration” but only compared to one’s expectation.
Most paradigms of risky decision-making (Mobini et al., 2000; Cardinal and Howes, 2005; Adriani and Laviola, 2006; van den Bos et al., 2006) deal exclusively with “failing to win”: i.e., complete omission of reward delivery, or delivery of an unpalatable reward. Thus, there is frustration of an expectation but no risk of a “negative payoff”, i.e., of finishing the session at a disadvantage compared with the start. In other words, every case of non-reward is an “unlucky event” but not necessarily a “risk”. Therefore, while the attraction for an uncertain reward may resemble the features of “gambling proneness”, it does not necessarily fit the construct of “risk proneness” (on this point, see Anselme, 2012). It should thus be noted that “uncertainty” and “risk” are not synonymous: indeed, the PDT and similar tasks do offer stochastic non-reward, which is even a “punishment”, but not necessarily a “risk”, which would require a construct implying a potential for overtly adverse consequences (e.g., footshock).
Recently, however, choice behavior has also been studied in a setting where a greater reward was associated with the probability of an overtly adverse event (i.e., the “risk”), represented by a foot shock (Simon et al., 2009). This may represent a promising methodological refinement of paradigms tailored for gambling proneness, although its ethical implications (especially when dealing with non-human primates) should be carefully evaluated.
Another attempt to deal with this issue is represented by the “Rat gambling task” (rGT; Zeeb et al., 2009). In this task, subjects have a limited amount of time to maximize the number of pellets earned, and loss is signaled by punishing timeouts during which reward cannot be obtained. On each trial, animals can choose from four options, each associated with a different number of sugar pellets; each subject then receives either the associated reward or a punishing timeout. Larger reward options are associated with a higher chance of longer timeouts, resulting in less reward earned overall per session. To maximize their earnings, rats must learn to avoid these risky options.
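To make this trade-off concrete, the sketch below computes the expected reward rate for a set of rGT-like options. The reward sizes, win probabilities, timeout durations, and trial duration are hypothetical placeholders chosen only to reproduce the qualitative structure described above (larger rewards paired with longer and more likely timeouts); they are not the actual parameters of Zeeb et al. (2009). With these placeholder values, the two larger-reward options yield the lowest rates, which is why a reward-maximizing rat should learn to avoid them.

```python
# Hypothetical rGT-like options: (pellets, p_win, timeout_s). Placeholder values only.
OPTIONS = {
    "P1": (1, 0.9, 5.0),
    "P2": (2, 0.8, 10.0),
    "P3": (3, 0.5, 30.0),
    "P4": (4, 0.4, 40.0),
}
TRIAL_DURATION_S = 5.0  # nominal time consumed by a rewarded trial (assumed)

def pellets_per_minute(pellets, p_win, timeout_s):
    """Expected reward rate if the same option were chosen on every trial."""
    expected_pellets = p_win * pellets
    expected_time = p_win * TRIAL_DURATION_S + (1 - p_win) * timeout_s
    return 60.0 * expected_pellets / expected_time

for name, params in sorted(OPTIONS.items()):
    print(name, round(pellets_per_minute(*params), 1))
```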
The Ecological Validity of Animal Models of Human (Pathological) Gambling
Classically, the performance of laboratory animals on tasks tailored for gambling proneness is investigated by placing the animals (in most cases laboratory rodents, primarily rats) individually in operant chambers for a short daily session (Evenden and Ryan, 1996, 1999; Mobini et al., 2000, 2002; Adriani et al., 2009). Thus, differences across laboratories in working environments and in human interventions (e.g., handling and transport to a novel testing room) may compromise the reliability and reproducibility of behavioral data (Crabbe et al., 1999; Wahlsten et al., 2003).
Therefore, for the ecological validity of animal models of human (pathological) gambling, it is critical to address some crucial issues (van den Bos et al., 2013). Firstly, confounding factors such as stress due to handling, exposure to a new environment and social isolation should be avoided (e.g., de Visser et al., 2006; Spruijt and de Visser, 2006; Koot et al., 2009, 2012; Zoratto et al., 2013). Secondly, the level of task automation should be increased, since the involvement of the experimenter during testing procedures (and in scoring behavior) may be difficult to standardize: indeed, results may often vary strongly between laboratories (Crabbe et al., 1999; Chesler et al., 2002). Thirdly, tasks incorporating a social component should be used, to assess the impact of social factors on gambling proneness. It is well known, indeed, that the social environment in humans has an undeniable effect on the development and maintenance of pathological gambling. Finally, innovative tasks should be developed that allow the investigation of the normal time-budget (and its potential disruptions) devoted to social interaction, foraging, and other activities. This aspect, as yet unexplored in animal models, would be highly relevant. The goal is to identify altered time budgets possibly analogous to the disruption of personal, professional or financial life widely reported in human pathological gamblers (DSM-IV-TR, American Psychiatric Association, 2000; Potenza, 2001).
To address the issues mentioned above, different automated social home-cage systems have recently been developed for the permanent monitoring of subjects’ operant choices and spontaneous (social and non-social) behavior (e.g., Adriani et al., 2012b). For instance, the Home-Cage Operant Panels (HOPs, PRS Italia) are new low-cost computer-controlled operant panels (Koot et al., 2009), which can be placed inside the home-cage, enabling rodents to operate them 24 h/day. Operant-choice tasks are particularly interesting to run during adolescence (Adriani and Laviola, 2003; Adriani et al., 2004), but social deprivation during this ontogenetic period may produce changes in reward sensitivity (Van den Berg et al., 1999), as well as psychotic-like symptoms (Leussis and Andersen, 2008). To solve this problem, Zoratto et al. (2013) recently developed a considerable methodological improvement that allows testing adolescent rats in the home-cage with a task tailored for gambling proneness, while they live socially and within the limited span of this developmental phase.
Risky Choices in Nonhuman Primates: Implications for Human Pathological Gambling
Laboratory studies in nonhuman primates can inform research on human pathological gambling in at least four ways. First, the behavioral tasks employed in laboratory rodents (see The Probabilistic-Delivery Task (PDT)) may be implemented in non-human primates to study the psychobiological bases and evolutionary roots of human gambling behavior. Second, the comparison of risk preferences between phylogenetically closely related nonhuman primate species with different ecologies can shed light on the selective pressures that shaped decision-making under risk in the course of evolution. Third, the study of how nonhuman primates make decisions under risk may provide important information on the contextual and social factors determining the occurrence of similar risky choices in humans. Fourth, since nonhuman primates are our closest relatives, but are not constrained by the socio-cultural system of beliefs and attitudes that characterizes humans, their study makes it possible to assess whether biases in decision-making under risk emerged before the human lineage diverged from the other primates, or whether they are a more recent (and possibly culturally determined) acquisition.
As noted above (see The Ecological Validity of Animal Models of Human (Pathological) Gambling), in studies with nonhuman primates the term “risk” is typically understood as the frustration of a positive expectation (failure to receive a reward), rather than as the occurrence of a negative event (a loss of valuable resources, or the infliction of physical damage). This is because the second type of “risk” cannot be implemented in nonhuman primate experiments, mostly for ethical reasons. However, nonhuman primates are clearly also exposed, in their natural environment, to “true” risks of the second type (e.g., predation). Note that, in humans, the risks involved in pathological gambling include the loss of job, family, and social reputation; in a laboratory model, the appropriate meaning of “risk” should therefore encompass the possibility of overtly adverse outcomes as a consequence of “high stakes”. In any case, a comparative approach has much to offer to our understanding of human attitudes towards such “high-stakes risks”, once appropriate methodologies for studying them are developed.
The Probabilistic-Delivery Task (PDT) in the Common Marmoset
The behavioral tasks mentioned in Section Rodents as an Animal Model of Gambling Behavior, each focusing on particular gambling-related aspects, are classically performed in laboratory rodents, primarily rats. However, the implementation of these tasks in other species (notably, non-human primates) may be relevant for studying the psychobiological bases and evolutionary roots of human gambling behavior. Moreover, very little is known about the possibility of running such tasks by means of automated operant panels. This possibility is especially relevant with a view to increasing the ecological validity of these models (see above). The HOPs, originally developed for rodents, have recently been adapted to small non-human primates such as the common marmoset (Callithrix jacchus; Adriani et al., 2013). In this recent experiment, in which the operanda were adapted (for example, into hand-poking holes), we showed that HOPs can be reliably exploited to model operant-choice behavior in a delayed-reward setting. The aim of future studies will be to evaluate marmosets as a possible model for gambling behavior, using a PDT and drawing a comparison with rats.
The “Ecological Rationality” of Risk Preferences
According to normative economic models, mainly formulated in mathematical terms, rational decision makers should be indifferent when choosing between a safe option and a risky option leading on average to the same payoff (e.g., von Neumann and Morgenstern, 1947). In practical terms, this means that a rational decision maker has no reason to prefer either option when offered a choice between, e.g., a certain, small reward and an uncertain one whose size is five-fold larger and whose probability of delivery is “p” = 20% (i.e., at the indifference point). However, neither human nor nonhuman animals behave like such a “rational” entity, as their instincts guide their choices towards some kind of preference: they are generally risk-averse for gains (e.g., Kahneman and Tversky, 1979; Kacelnik and Bateson, 1996), with the notable exception of nonhuman primates, for which the picture is more complicated (Stevens, 2010). To explain this pattern of behavior, it has been proposed that risk-related preferences could reflect the environments in which species evolved and, in particular, their feeding ecology (Heilbronner et al., 2008), leading to “ecologically rational” decisions (Gigerenzer and Todd, 1999). To test this ecological hypothesis, risk preferences have been compared in phylogenetically closely related primate species using two main paradigms.
In the simplest paradigm, the subject is given a series of choices between two options: the “safe” option yields a reward that is constant in amount, whereas the “risky” option yields a reward that varies probabilistically around the mean, with the two options leading on average to the same payoff. Individuals’ attitude towards risk is inferred on the basis of their preference for the safe option (indicating risk aversion), for the risky option (indicating risk seeking) or for neither option (indicating risk neutrality) (Kacelnik and Bateson, 1996, 1997). Bonobos (Pan paniscus) and chimpanzees (Pan troglodytes), two closely related species that evolved behavioral differences possibly as a result of their different ecologies (Wrangham and Pilbeam, 2001), received an experimental schedule whereby they were offered choices between two different upside-down bowls, covering the safe option (always four food items) and the risky option (either one or seven food items with equal probability; Heilbronner et al., 2008). The two species differed markedly in their risk preferences: chimpanzees were risk-seeking, whereas bonobos were risk-averse. Their feeding ecology offers a plausible explanation for this difference: bonobos feed mainly on terrestrial herbaceous vegetation, an abundant and reliable food source, whereas chimpanzees feed primarily on fruit, a more variable food source (Wrangham and Peterson, 1996). Thus, since chimpanzees often rely on more unpredictable food sources than bonobos, this evolutionary pressure may have shaped their behavioral regulation so as to render them tolerant to, if not attracted by, reward uncertainty. As such, an ecological feature may have led them to be more risk-seeking than their sister species (Heilbronner et al., 2008; Stevens, 2010).
A methodologically similar study conducted on individuals belonging to different lemur species (Lemur catta, Eulemur mongoz, Varecia rubra) showed that, like bonobos, lemurs were clearly risk-averse (MacLean et al., 2012). Subjects were required to choose between two images on a touch-screen, associated with a safe option and a risky option, respectively. The safe option always led to one food item, whereas the payoff of the risky option varied across two experiments. In the first experiment, the risky option corresponded either to two food items or to zero food items with equal probability (leading to an average payoff of one food item, as for the safe option). In the second experiment, the payoff of the risky option was gradually increased across trials up to 7.5 times that of the safe option. In the first experiment, lemurs strongly preferred the safe option; in the second experiment, half of the subjects switched to risk seeking only when the potential payoff of the uncertain option was at least five times higher than that of the safe option. These results are somewhat puzzling if compared to the findings obtained by Heilbronner et al. (2008) in chimpanzees. However, it can be hypothesized that animals living in a relatively productive environment compared to lemurs, like chimpanzees, can also exploit risky resources, and thus evolve a risk-seeking attitude, without incurring the danger of starvation. In contrast, for animals living in very harsh environments, like lemurs (which have also evolved several anatomical and behavioral traits as adaptations to their unpredictable habitats; Wright, 1999), risk proneness is not advantageous in the long term, and it is better to rely on low-quality yet stable resources (Caraco, 1981; McNamara, 1996).
In a more complex paradigm, Haun et al. (2011) investigated whether, when choosing between a safe and a risky option, the four nonhuman great ape species (Pan paniscus, Pan troglodytes, Gorilla gorilla, and Pongo abelii) make decisions based on the expected value, defined as the probability of receiving the reward multiplied by the amount of the reward. In each trial, subjects chose between a safe option, consisting of a small food item hidden under a yellow cover positioned to the right of the subject, and a risky option, consisting of a large food item placed in one of four brown bowls arranged in a row in front of the subject and hidden under a blue cover. The probability of receiving the reward was manipulated by increasing the number of blue cups covering the four brown bowls (varying from P = 100%, when one blue cup covered the brown bowl containing the risky option, to P = 25%, when four blue cups covered all the brown bowls), whereas the relative value of the risky option was increased by decreasing the size of the small food item. Overall, apes preferred the risky option, although their preferences were influenced by the expected value. In fact, subjects chose the safe option more often when (i) the safe reward increased in size relative to the risky reward, and (ii) the probability of receiving the risky reward decreased. As for species differences, chimpanzees were more risk-seeking than bonobos (as in Heilbronner et al., 2008) also when tested in this more complex paradigm, and orang-utans, whose feeding ecology is somewhat similar to that of chimpanzees (Knott, 1999), were also risk-seeking.
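With hypothetical reward sizes, the expected-value comparison driving choice in this design can be written out explicitly; only the probability manipulation below follows the description above, while the food amounts are placeholders.

```python
def expected_value(p_win, amount):
    """Expected value = probability of receiving the reward x reward amount."""
    return p_win * amount

LARGE_RISKY = 4.0   # hypothetical size of the large (risky) food item
SMALL_SAFE = 1.0    # hypothetical size of the small (safe) food item

# Adding blue cups lowers the probability of the risky option from 100% to 25%.
for n_cups in (1, 2, 3, 4):
    p = 1.0 / n_cups
    print(f"{n_cups} cup(s): EV(risky) = {expected_value(p, LARGE_RISKY):.2f}"
          f" vs EV(safe) = {SMALL_SAFE:.2f}")
```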
Interestingly, similar differences in risk preferences have been observed in human small-scale societies, possibly as an effect of cultural differences and environmental conditions (Kuznar, 2001; Henrich and McElreath, 2002) that deserves further investigation.
Contextual and Social Factors Affecting Risk Preferences in Nonhuman Primates
Several neurophysiological studies in nonhuman primates have employed risk preference tasks to understand whether single neurons track the subjective value rather than the objective value of a chosen option (McCoy and Platt, 2005; O’Neill and Schultz, 2010; So and Stuphorn, 2010; but see Yamada et al., 2013). In a first study, McCoy and Platt (2005) tested rhesus macaques (Macaca mulatta) in a visual gambling task and measured the activity of single neurons in the posterior cingulate cortex. Macaques were presented with choices between visual targets offering on average the same reward but differing in reward uncertainty. They had to choose whether to direct their gaze to a safe target (offering 150 ms of access to fruit juice) or to a risky target (randomly offering either shorter or longer than 150 ms access to juice, resulting on average in 150 ms of access). Overall, monkeys strongly preferred the risky target, and its selection increased with the degree of risk, regardless of the internal state of the subjects. Neuronal activity also increased with increasing variance in the payoff of the risky option, mirroring the macaques’ risk proneness observed at the behavioral level. Interestingly, macaques continued to prefer the risky option even when the probability of receiving the larger outcome was reduced from 50 to 30% and its payoff was thus smaller than that of the safe option.
In the above study, rhesus macaques were consistently risk-seeking, and the same pattern was also observed in subsequent studies carried out by the same authors and in other neurophysiological laboratories (Hayden et al., 2008b, 2010; Long et al., 2009; Watson et al., 2009; O’Neill and Schultz, 2010; So and Stuphorn, 2010; Heilbronner et al., 2011; but see Yamada et al., 2013). Interestingly, macaques’ choices are not explained by non-linear utility functions (as proposed by Lee, 2005), since they preferred an uncertain option, in which the delivery of the larger payoff was unpredictable, to an alternating option, in which the delivery of the larger payoff predictably alternated across trials (Hayden et al., 2008a). Thus, borrowing the distinction between uncertainty and risk favored in the field of behavioral economics (Knight, 1921; Camerer and Weber, 1992; Tversky and Kahneman, 1992), macaques are not only risk-prone, but also uncertainty-seeking.
However, rhesus macaques do not exhibit a preference for risky options under all conditions. In fact, when another macaque sample was tested in a risk preference task under different conditions, the animals’ behavior ranged from risk aversion to risk neutrality, but none of them was risk-seeking (Behar, 1961). Thus, although rhesus macaques’ ecology may suggest a general predisposition for risk proneness (Goldstein and Richard, 1989; Richard et al., 1989), Heilbronner and Hayden (2013) proposed that macaques’ risk preferences are driven by some features of the task design typically used in neurophysiological studies, such as (i) the small stakes involved in these experiments (typically 0.1–0.3 ml of juice); (ii) the large number of trials (the same decision problem is typically presented hundreds or thousands of times to the same subject); and (iii) the short intertrial intervals (ITIs).
At least for the latter point, an experiment showed that this might be the case. Whereas in McCoy and Platt (2005), where macaques were risk-seeking, the average ITI was 3 s, in other nonhuman animal studies, where individuals were risk-averse (reviewed in Kacelnik and Bateson, 1996), the ITI was much longer (usually 30 s). Thus, Hayden and Platt (2007) presented rhesus macaques with a novel version of the visual gambling task in which the variance of the risky option was kept constant and the ITI varied from 1 s to 90 s. Interestingly, they found that, as the ITI increased, macaques’ preference for the risky option decreased, and monkeys turned to risk neutrality at the 90 s ITI. To explain this pattern, Hayden and Platt (2007) hypothesized that macaques interpreted the risky option as a certain reward available at a future time: since the higher payoff may occur on the next trial, the subjective expected utility of the risky option depends on the length of the ITI. Interestingly, when humans were tested with a paradigm as similar as possible to that usually employed with macaques, they were more risk-seeking than in typical one-shot gambling experiments employing questionnaires (Hayden and Platt, 2009).
However, the above factors cannot explain the risk-seeking behavior observed in chimpanzees and orangutans (Heilbronner et al., 2008; Haun et al., 2011), where the stakes involved were comparatively high, the number of trials lower, and the ITIs longer than in the macaque studies. Although the results on chimpanzees appear to be very robust and have been replicated with larger samples (Rosati and Hare, 2012, 2013), it cannot generally be excluded that the different risk preferences obtained in the nonhuman primate studies reviewed so far were due to individual differences. In fact, in rhesus macaques, risk sensitivity appears to be partly determined by the serotonergic system: serotonin depletion increases risk proneness (Long et al., 2009), a finding consistent with recent rodent data (Koot et al., 2012). Similarly, the length polymorphism of the serotonin transporter gene promoter (known as 5-HTTLPR, the serotonin-transporter-linked polymorphic region) is crucial as well (Watson et al., 2009), in relation to interspecific and intraspecific behavioral variability. Wendland and colleagues (2006) found that, among macaque species, the 5-HTTLPR was responsible for interspecific behavioral variability. In contrast, Chakraborty et al. (2010) proposed that this particular polymorphism plays a role in intraspecific variability, which in turn may account for the greater ecological success of 5-HTTLPR-polymorphic species. An example of its consequences in the wild is the presumed selective emigration of rhesus macaques over the Himalayan Mountains into China in the early history of the species (Champoux et al., 1997; Heinz et al., 1998). According to Belsky et al. (2009), this particular polymorphism may confer an advantage when dealing with novel, possibly hostile environments. Relative to Indian-derived monkeys, Chinese-hybrid macaques with a higher prevalence of the long repeat allele of the 5-HTTLPR show predispositions to aggressive and risk-taking behaviors, as well as lower levels of serotonin as indexed by its metabolite (Champoux et al., 1997; Heinz et al., 1998). Nonetheless, although feeding ecology and inter-individual differences are likely to influence risk preferences, the findings obtained in rhesus macaques underline the importance of carefully controlling all task and environmental parameters when comparing risk preferences among different species.
Finally, as observed in humans (Bault et al., 2008; Ermer et al., 2008; Hill and Buss, 2010), another important factor affecting nonhuman primates’ risk preferences seems to be the social context in which individuals make decisions. To our knowledge, there is only one study evaluating this aspect in nonhuman primates (Rosati and Hare, 2012). Chimpanzees and bonobos were presented with choices between a safe option, yielding an intermediately preferred food item, and a risky option, yielding either a low-preferred or a high-preferred food item, in a competitive context and in a play context. In both contexts an experimenter interacted with the subject before the presentation of the decision-making task: in the competitive context, the experimenter first offered the subject a food item and then, when the subject attempted to take it, immediately pulled it out of the subject’s reach; in the play context, the experimenter tickled or chased the subject. Apes’ behavior in each condition was compared with a neutral context, in which the experimenter was present but did not interact with the subject. All subjects chose the risky option more in the competitive than in the neutral context, whereas the play context did not increase risk proneness. An eco-ethological explanation is likely, given that feeding competition and the consequent loss of resources is a potential problem for all group-living species. In this frame, it can be proposed that, in the competitive context, the salience and attractiveness of the larger option are increased notwithstanding its uncertainty.
The Evolutionary Origins of Biases in Decisions Under Risk
When making choices between risky options, humans show the so-called “reflection effect”, i.e., the tendency to evaluate gambles in relation to an arbitrary reference point. The same individual can decide differently, being risk-seeking when some options are framed as losses and risk-averse when the same, identical options are framed as gains (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981).
Nonhuman animals apparently share with humans the reflection effect and other behavioral biases (e.g., Waite, 2001; Marsh and Kacelnik, 2002; Shafir et al., 2002). This can be either because of an early emergence of economic biases during evolution, or because of convergent evolution. Only the study of nonhuman primates, our closest relatives, can help disentangle these two hypotheses. To this aim, in recent years a series of studies investigated decision-making under risk in capuchin monkeys (Sapajus spp., formerly Cebus apella) that, despite 35 million years of independent evolution, show many striking analogies with humans in terms of encephalization index, ontogeny, lifespan, and various cognitive traits (Fragaszy et al., 2004).
In a first study (Chen et al., 2006), capuchins were tested in a token exchange task, in which they were provided with a starting budget of 12 tokens that could be exchanged with either of two experimenters, as they preferred. Preliminary experiments demonstrated that capuchins can behave rationally in this framework: when the two experimenters provided the same amount of two equally preferred food types, capuchins exchanged a similar number of tokens with each of them; however, when one experimenter doubled the amount of food provided in exchange for one token, or showed two food items and delivered either one or two pieces with the same probability, capuchins reliably shifted their preference towards her, showing that they were able to maximize their payoff. In the main experiment, capuchins were presented with choices between experimenters providing a risky “trade” of either one or two food items with equal probability, but the amount of food initially displayed to the subject differed: one experimenter showed one food item and added a “gain” of one additional food item in half of the trials, whereas the other experimenter showed two food items and subtracted a “loss” of one food item in half of the trials. Although the two experimenters provided on average the same payoff, capuchins preferred to exchange their tokens with the first experimenter, even though, from a rational perspective, they should have been indifferent between the two options. These results demonstrate that, as in humans, capuchins chose on the basis of an arbitrary reference point (namely, the initial food amount shown by the two experimenters), therefore preferring the experimenter who framed the “trade” as a gain.
In a subsequent study (Lakshminarayanan et al., 2011), capuchins were tested with a similar paradigm, presenting them with choices between a risky option and a safe option yielding the same average payoff (two food items) under two conditions: (i) Losses: both experimenters initially displayed three food items, but the first experimenter always delivered two food items, whereas the second experimenter delivered either one or three food items with equal probability; and (ii) Gains: both experimenters initially displayed one food item, but the first experimenter always delivered two food items, whereas the second experimenter delivered either one or three food items with equal probability. Overall, capuchins showed clear-cut evidence of the “reflection effect”, since they were risk-seeking when options were framed as losses and risk-averse (although to a lesser extent) when options were framed as gains. Again, decisions appear to be made by subjects relative to their initial reference point.
In sum, the above findings suggest that humans and capuchin monkeys share the reflection effect, as is reported with other behavioral biases (Chen et al., 2006; Lakshminarayanan et al., 2008). However, a very recent “up-linkage” replication of Lakshminarayanan et al. (2011), in which adult humans were tested with exactly the same procedure employed with capuchin monkeys, failed to find a reflection effect (Silberberg et al., 2013). Nonetheless, it should be noted that such a replication may have had a low ecological validity for cognitively sophisticated adult humans, especially because of the repeated interactions with the experimenters, which the participants may have found boring or embarrassing. Future studies should investigate biases in decisions under risk in closely-related non-human primate species with different ecologies (Clutton-Brock and Harvey, 1979; Rosati and Stevens, 2009; Rosati and Hare, 2012) in order to understand whether these behavioral patterns are maladaptive, suboptimal, or instead “ecologically rational” (Todd and Gigerenzer, 2000).
Risk Attitudes, Environmental Uncertainty and Addictive Behavior: Perspectives from Computational Neuroscience and Evolutionary Robotics
Computational models are a new way of doing science which can be very useful for theorizing about extremely complex systems like vertebrate organisms and their brains. The usefulness of computational models comes largely from two factors: (i) they express hypotheses in a formal, precise, and unambiguous way, so that from those hypotheses a number of detailed predictions can be unequivocally derived and then tested through empirical experimentation; (ii) they allow a degree of direct manipulation of all relevant variables that is unparalleled by naturalistic methods.
The vast majority of computational models deal with the normal functioning of the brain and normal cognitive phenomena, but since the 1990s a number of models have been proposed that address psychiatric and neurological disorders. Recently, these models have attracted increasing interest, and several scholars have started to discuss the prospects, challenges, and limitations of computational psychiatry (Maia and Frank, 2011; Montague et al., 2012; Huys, 2013). There are many ways in which computational models may help research on decision-making in general and pathological gambling in particular. Here, we will focus on three different kinds of models: (1) normative (algorithmic) models; (2) neural models; and (3) evolutionary robotics models.
Normative (Algorithmic) Models
A first class of relevant models is what we can call “normative” or “algorithmic” models. These models derive from the computational reinforcement learning literature (Sutton and Barto, 1998) and are normative because they are based on machine learning algorithms that prescribe how an agent should behave in order to maximize its future rewards. They became famous in the mid 1990s, when it was discovered that the dynamics of dopamine, which is highly involved in motivation and learning (Wise, 2004; Schultz, 2006; Berridge, 2007), as well as in drug addiction, could be modeled by the reward prediction error signal postulated in Temporal Difference (TD) reinforcement learning (Barto, 1995; Schultz et al., 1997). The reward prediction error of TD learning is a signal that quantifies “surprise”, that is, the difference between expected and actual rewards, and it is used in reinforcement learning models as the signal that drives action learning. In a nutshell, the theory holds that an agent continually evaluates the current states (situations) with respect to the reward that it expects to achieve in those states. If it gets more reward than expected, a prediction error signal is generated and used to update both its prediction and its action policy, that is, the way the agent selects its actions. The idea is that the probability of selecting an action again, in a given context, is increased if that action leads to more reward than expected and is decreased if it leads to less reward than expected. Dopamine behaves just like the reward prediction error: its release is triggered by unexpected rewards or unexpected stimuli that predict reward, it is not released when the reward is perfectly predictable, and it is inhibited (a dip in dopamine levels occurs) when an expected reward is omitted. This has led to the hypothesis that dopamine serves the same function as the reward prediction error in reinforcement. Phasic dopamine release would have the role of making the agent learn (1) the value (“saliency”) of stimuli and (2) which actions (“strategies”) to deploy in each circumstance in order to maximize future rewards. In mammals, these two roles are attributed to the mesolimbic and nigrostriatal dopamine pathways, respectively. This theory has guided an enormous amount of empirical research and has received so much empirical support that it is now an important tenet of contemporary neuroscience, as well as one of the most successful examples of the use of computational models in the behavioral and brain sciences (e.g., Montague et al., 2004; Ungless, 2004; Wise, 2004; Sugrue et al., 2005; Graybiel, 2008; Glimcher, 2011).
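A minimal sketch of the reward prediction error at the heart of TD learning, assuming a tabular setting with discrete states; the discount factor, learning rate, and state names are illustrative, not taken from any specific model in the literature.

```python
from collections import defaultdict

GAMMA = 0.9   # discount factor (illustrative)
ALPHA = 0.1   # learning rate (illustrative)

V = defaultdict(float)   # state-value estimates (the learned "saliency" of situations)

def td_update(state, reward, next_state):
    """One TD(0) update: delta is the reward prediction error ('surprise')."""
    delta = reward + GAMMA * V[next_state] - V[state]
    V[state] += ALPHA * delta
    return delta

# A cue state reliably followed by reward: early on delta is large, and as the
# reward becomes predicted it shrinks towards zero; omitting the reward after
# learning would yield a negative delta (the dopamine "dip").
for _ in range(200):
    delta = td_update("cue", reward=1.0, next_state="end")
print(round(V["cue"], 2), round(delta, 4))
```

In a full actor-critic architecture, the same delta would also reinforce or weaken the action that led to the outcome, corresponding to the second (action-learning) role of phasic dopamine described above.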
What is most interesting for our purposes is that the reward prediction error hypothesis for dopamine has not only been used to predict and explain behavioral and brain dynamics in normal conditions, but also to explain pathological phenomena. In particular, normative algorithmic models have been used to interpret brain imaging data related to various mental pathologies like schizophrenia and depression-related anhedonia (Smith et al., 2007; Kumar et al., 2008; Murray et al., 2008; Huys et al., 2013).
Moreover, a seminal work by David Redish (2004) used a TD model to explain drug addiction. In particular, the model explained addiction as the consequence of the pharmacological effect that certain drugs of abuse, like amphetamines, cocaine, or nicotine, have on forebrain dopamine circuits. Indeed, these drugs are known to increase dopamine levels upon acute administration. According to Redish’s model, the addictive effect of these drugs is associated with specific consequences of the dopamine elevation produced by the drug. With natural rewards, a phasic release of dopamine is present only when the reward is unexpected. In this perspective, the normal process of reinforcement, produced by any reward, can be cancelled out by accurate predictions. On the contrary, the model postulates that drugs of abuse also generate a pharmacologically induced dopamine release, a component that cannot be compensated for by predictions. Since, in this way, the dopamine prediction error never disappears, as if drug-related pleasure were always “unexpected”, the subjective values of drug-related internal states keep increasing indefinitely, and the actions that lead to drug consumption keep being reinforced, hence becoming a strong habit and ultimately resulting in the development of addiction. This model explains several aspects of addiction, including, for example, the fact that both drugs and natural rewards are sensitive to effort-related costs, but the reward provided by drugs is much less sensitive than that provided by natural rewards. However, one of the key predictions of the theory has been falsified by subsequent research. In particular, the theory predicted that drugs should prevent blocking, i.e., the phenomenon whereby a stimulus that predicts a reward, if paired with a new stimulus before the reward is presented, prevents the second stimulus from being conditioned, because it stops the learning-inducing dopamine prediction error from occurring. If a drug always produced a dopamine prediction error, as postulated by Redish’s model, then conditioning of the second stimulus should occur, but it does not (Panlilio et al., 2007).
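The crucial contrast between a natural reward and a drug reward in this account can be rendered in a few lines. The sketch below is a deliberately simplified reading of Redish’s (2004) proposal, not his implementation: the drug contributes a dopamine component D that predictions cannot cancel, so the effective error never falls below D and the learned value of drug-associated states grows without bound. Parameter values and variable names are illustrative assumptions.

```python
# Simplified rendering of the key assumption in Redish (2004): for drug rewards,
# a pharmacological dopamine component D adds to the prediction error and cannot be
# cancelled by predictions, so the effective error never falls below D.
# All values are illustrative.

alpha, D = 0.1, 0.5
r = 1.0                                    # same nominal reward magnitude in both cases
V_natural, V_drug = 0.0, 0.0

for trial in range(300):
    # Natural reward: the error shrinks as the prediction becomes accurate.
    delta_natural = r - V_natural
    V_natural += alpha * delta_natural

    # Drug reward: the error is bounded below by the drug-induced component D.
    delta_drug = max(r - V_drug + D, D)
    V_drug += alpha * delta_drug

print(round(V_natural, 2))                 # converges to ~1.0 and stays there
print(round(V_drug, 2))                    # still growing after 300 trials (no asymptote)
```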
Building on this computational interpretation of drug addiction, Redish et al. (2007) proposed a model that provides a possible explanation of pathological gambling. This model adds to the basic TD prediction error model, which learns the values of states and actions, a second “situation recognition” system that learns to categorize states. In particular, this system learns to categorize as different states all those situations in which, after high rewards have been received, those rewards are no longer present. Notably, this addition was made to accommodate within the TD framework basic reinforcement learning phenomena related to the extinction of behaviors and their renewal; however, it also provides an explanation of gambling. Indeed, many pathological gamblers became addicted after having experienced an unlikely sequence of wins or a single very high win (Custer, 1984; but see Kassinove and Schare, 2001, for empirically founded doubts on the strength of this big win effect). The model assumes that, when the gambler experiences such a huge success (or the feeling of having almost succeeded, the so-called “near miss” effect; Kassinove and Schare, 2001), he forms a very strong and unrealistic expectation that he can win again (or finally win; on the similarity in neural processing of wins and near misses, see Chase and Clark, 2010; Winstanley et al., 2011). When the gambler starts to lose, instead of unlearning and cancelling this (false) expectation through negative reinforcement, his situation recognition system starts to create new “associative” states, looking for cues that are supposed to distinguish the winning situations from the losing ones. Hence, according to this model, pathological gambling results from a misclassification of the situation, with the irrational belief that the contingencies in which the gambler can win are different from those in which he loses. This explanation can also account for two related phenomena: (1) the “hindsight bias” effect, whereby gamblers analyze their losses and (post hoc) identify the cues that differed from the situation in which they won, and (2) the “illusion of control” phenomenon, whereby they believe that they can control an otherwise random situation by identifying and following the right cues that, in their mind, distinguish winning from losing situations (Custer, 1984; Wagenaar, 1988). The most common superstitions of pathological gamblers are thus accounted for.
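The state-splitting intuition can be illustrated with a toy classifier of “situations”. The following sketch is not Redish et al.’s (2007) implementation: the splitting rule, the thresholds, and the situation names are simplifying assumptions, chosen only to show how an inflated expectation can survive a losing streak once the losses are attributed to a supposedly different situation.

```python
# Toy illustration of the situation-recognition idea in Redish et al. (2007), not their
# implementation: after a big unexpected loss in a highly valued situation, the agent
# spawns a new "situation" for the losses instead of unlearning the inflated value.
# The splitting rule and all thresholds are simplifying assumptions.

alpha = 0.2
values = {"casino": 0.0}        # learned value of each recognized situation
current = "casino"

def experience(outcome):
    """Update the value of the currently recognized situation, or split it."""
    global current
    delta = outcome - values[current]
    if delta < -1.0 and values[current] > 0.5:
        # Strong negative surprise in a "winning" situation: reclassify rather than unlearn.
        current = current + "/post-loss"
        values.setdefault(current, 0.0)
        delta = outcome - values[current]
    values[current] += alpha * delta

experience(10.0)                # a single big win inflates the value of "casino"
for _ in range(30):
    experience(-1.0)            # a long losing streak follows...

print(values)                   # ...yet the original "casino" situation keeps its high value
```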
One limitation of this model is that it tries to explain pathological gambling as a unitary phenomenon with a unique cause, whereas it is likely that several different causes underlie this complex behavior, both within the same individual and across different individuals. For example, many pathological gamblers keep gambling even though they report knowing that they will lose, which is at odds with the model (but see the results on cue-induced secondary rewards in rodents and their potential implications for human gambling, discussed in Section Assessment of Gambling Proneness: Clinical and Preclinical Approaches). However, the most important limitation of this kind of normative, algorithmic model is that it provides abstract explanations of what computations may go awry in pathological conditions, but it does not specify the actual brain mechanisms that may underlie these phenomena: hence the range of phenomena that it can account for and predict is limited. In order to investigate the details of the brain processes underlying the phenomena under study, we need models that simulate those details. This is the province of neural models.
Neural Models
Neural models explain a cognitive phenomenon by simulating (with a variable degree of abstraction) neurons and their connections, and by making the simulated neural network reproduce the phenomenon. The first models of this kind were called “connectionist” models (McClelland and Rumelhart, 1989): they consisted of very simple neural networks, which were supposed to perform computations in a brain-like manner but whose structure was not meant to replicate the structure of real brains. More recently, much more biologically realistic models have been developed in computational neuroscience. In these models, different groups of nodes represent neurons belonging to different parts of the brain, and the connections between the different groups correspond to the connections between those brain areas. The architecture and functioning of the model are thus based on the anatomy and physiology of the brain areas that are known to be relevant for the phenomenon under study. If the model is able to reproduce the phenomenon, this gives us a detailed explanation of what brain mechanisms may be responsible for it. The plausibility of such an explanation rests on two foundations: (i) how many anatomical and physiological constraints are considered, and how faithfully they are respected; and (ii) how many different phenomena the model is able to account for. Furthermore, the model can be used to derive a number of predictions that can then be tested in humans as well as in animal models, through further empirical experiments.
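As a generic illustration of this methodology (not of any specific model reviewed here), the sketch below wires together two populations of rate-coded units standing in for two interconnected “areas”: the block-structured weights encode the assumed anatomy, and leaky-integrator dynamics give each unit’s activity. All sizes, time constants, and connection strengths are arbitrary illustrative choices.

```python
import numpy as np

# Generic firing-rate sketch of the "neural model" methodology: two unit populations
# stand in for two interconnected brain areas; within- and between-area weight blocks
# encode the assumed anatomy, and leaky-integrator dynamics give each unit's activity.
# The structure is illustrative and not tied to any specific model discussed in the text.

rng = np.random.default_rng(0)
n, tau, dt = 10, 0.05, 0.001

def rate(x):
    return np.tanh(np.maximum(x, 0.0))          # positive, saturating activation

# Block-structured connectivity: area A projects strongly to area B, B feeds back weakly.
W_AA = 0.1 * rng.standard_normal((n, n))
W_AB = 0.2 * rng.standard_normal((n, n))        # B -> A feedback
W_BA = 0.8 * rng.standard_normal((n, n))        # A -> B projection
W_BB = 0.1 * rng.standard_normal((n, n))

x_A, x_B = np.zeros(n), np.zeros(n)
stimulus = np.ones(n)                           # external input delivered to area A only

for _ in range(2000):                           # 2 s of simulated time
    input_A = W_AA @ rate(x_A) + W_AB @ rate(x_B) + stimulus
    input_B = W_BA @ rate(x_A) + W_BB @ rate(x_B)
    x_A += dt / tau * (-x_A + input_A)          # leaky integration towards the input
    x_B += dt / tau * (-x_B + input_B)

print(rate(x_A).mean(), rate(x_B).mean())       # stimulating A also recruits activity in B
```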
To the best of our knowledge, no neural models have been developed so far to explain pathological gambling, although there is evidence of a role of midbrain dopamine in the coding of reward uncertainty (Fiorillo et al., 2003), suggesting an influence of the dopaminergic system on risk-taking behavior. On the other hand, several models, both connectionist (e.g., Cohen and Servan-Schreiber, 1992; Cohen et al., 1996; Braver et al., 1999) and biologically detailed ones (Frank et al., 2004, 2007a,b,c; Gutkin et al., 2006; Waltz et al., 2007; Rolls et al., 2008; Ahmed et al., 2009; Maia and Frank, 2011), have been developed to describe neurological and psychiatric pathologies, including schizophrenia, Parkinson disease, Tourette’s syndrome, ADHD, and drug addiction. Briefly reviewing these existing models can provide useful suggestions on how to apply the same methods to the investigation of pathological gambling.
Most of these models deal with the dopaminergic system and its interactions with the basal ganglia-thalamo-cortical circuits that implement action selection. A notable example is the work of Frank and colleagues on modeling several aspects of Parkinson disease (e.g., Frank et al., 2004, 2007a; Moustafa et al., 2008). Parkinson disease is known to depend on the degeneration of nigro-striatal dopamine cells. This work is based on a detailed model of the basal ganglia-thalamo-cortical circuit that is assumed to implement action selection and reinforcement learning (e.g., Frank et al., 2001). The main idea behind the model is that two sub-systems, a Go and a no-Go system, are present in the basal ganglia and together implement action selection. In particular, neurons in the basal ganglia are supposed to allow the release of actions in the cortex by selectively disinhibiting a given action (through the Go system) while inhibiting the others (through the no-Go system). Furthermore, a third structure of the basal ganglia (the subthalamic nucleus) is supposed to exert a dynamic, global inhibitory role, modulating the threshold at which actions are selected depending on the level of cortical conflict. Importantly, neurons belonging to the different systems have different dopamine receptor distributions, such that dopamine excites Go neurons and inhibits no-Go neurons. Through such a model, Frank and colleagues have been able to reproduce and explain a number of detailed behavioral and neural data, and to predict new data that have been empirically verified, such as the effects of dopaminergic medication and of deep brain stimulation of the subthalamic nucleus (a procedure known to improve motor symptoms) on different cognitive tasks in Parkinson patients (Frank et al., 2007a), and why medication can lead those patients to develop pathological gambling (Dodd et al., 2005).
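In highly reduced form, the opponent Go/no-Go logic can be sketched as follows. This is not Frank and colleagues’ network (which models striatal, pallidal, subthalamic, thalamic, and cortical layers explicitly), but a two-pathway learner in which a dopamine_level parameter scales the impact of positive feedback, as a crude stand-in for dopaminergic state or medication; all names and values are illustrative assumptions.

```python
import numpy as np

# Reduced sketch of the opponent Go/no-Go logic in Frank and colleagues' models (not
# their actual network): each action has Go and no-Go weights; action propensity follows
# Go minus no-Go; positive feedback (a dopamine burst) strengthens Go and weakens no-Go
# for the chosen action, while negative feedback (a dip) does the opposite.
# "dopamine_level" crudely scales the burst. All values are illustrative.

rng = np.random.default_rng(0)
n_actions = 2
go = np.full(n_actions, 0.5)
nogo = np.full(n_actions, 0.5)
alpha = 0.05

def choose():
    propensity = go - nogo
    p = np.exp(propensity) / np.exp(propensity).sum()    # softmax over net Go activity
    return rng.choice(n_actions, p=p)

def learn(action, positive_feedback, dopamine_level=1.0):
    if positive_feedback:
        go[action] += alpha * dopamine_level             # burst: strengthen Go, weaken no-Go
        nogo[action] -= alpha * dopamine_level
    else:
        go[action] -= alpha                              # dip: weaken Go, strengthen no-Go
        nogo[action] += alpha

# Probabilistic selection: action 0 is rewarded 80% of the time, action 1 only 20%.
for _ in range(2000):
    a = choose()
    rewarded = rng.random() < (0.8 if a == 0 else 0.2)
    learn(a, rewarded)

print(go - nogo)     # the better action ends up with the larger net Go propensity
```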
In order to explain other facets of this complex behavior and its neural basis, many more details should be added to these models. For example, pathological gambling is known to be associated with dysfunction not only of dopamine, but also of other neuromodulators such as serotonin (e.g., Nordin and Eklundh, 1999) and noradrenaline (e.g., Meyer et al., 2004). For this reason, the role of these two neuromodulators should be modeled in future research, possibly by incorporating findings from other computational models that deal with the interactions between these neuromodulators and dopamine (e.g., Daw et al., 2002). Furthermore, beyond anomalies in the basal ganglia and in associated fronto-cortical areas, recent evidence suggests that deficits in amygdala functioning may also contribute to gambling behavior by significantly reducing loss aversion (De Martino et al., 2010). For this reason, modeling pathological gambling may require modeling the interactions between the amygdala and the basal ganglia, as done in recent neuro-robotic models of the role of the amygdala in conditioning (Mannella et al., 2007, 2008, 2010; Mirolli et al., 2010).
Finally, factors related to intrinsic motivations (i.e., motivations related to novelty, surprise, and competence acquisition: Ryan and Deci, 2000; Baldassarre and Mirolli, 2013) may also play a role in pathological gambling. For example, Parkinson patients who develop pathological gambling are distinguished from those who do not by tests that measure impulsivity and novelty seeking (Voon et al., 2007). Recent computational models assume that intrinsic motivations work by hijacking the brain systems that also underlie extrinsic motivations, in particular the dopaminergic system and the action selection system in the basal ganglia (e.g., Kakade and Dayan, 2002; Mirolli et al., 2013). Some of these models are detailed neural models very similar to the ones discussed above with respect to dopamine in Parkinson disease, including basal ganglia-thalamo-cortical circuits, the dopaminergic system, and other relevant areas (e.g., Baldassarre et al., 2013; Fiore et al., 2014). Merging the two kinds of models may be a promising way to further understand the brain mechanisms underlying pathological gambling.
Evolutionary Robotics Models
Evolutionary robotics provides a valuable platform for testing evolutionary hypotheses on the ecological pressures behind the emergence of specific behaviors and traits. Such hypotheses, like those already discussed in Sections Rodents as an Animal Model of Gambling Behavior and Risky Choices in Nonhuman Primates: Implications for Human Pathological Gambling with respect to risk attitudes, are often plausible, but also hard to verify directly. They rely on key assumptions about the environment in which the evolution of a given species occurred, and yet it is typically hard to observe with precision the effects of a given ecological variable (e.g., dangers of predation) on the behavior under study (e.g., risk proneness/aversion). Moreover, these assumptions refer to ancestral environments, not present-day ecologies: while there are methods to acquire data on living conditions in ancestral times (e.g., through paleobiology and primate archeology; Haslam et al., 2009), they are bound to deliver incomplete information at best, in spite of substantial research efforts. Recent work has demonstrated the viability and fruitfulness of computational methods such as experimental evolutionary robotics: the basic idea is to let populations of simulated robots evolve under specific ecological pressures, and then observe their behavior with the aim of drawing implications for the understanding of processes in natural organisms faced with similar, uncertainty-based tasks (Da Rold et al., 2011; Saglimbeni and Parisi, 2011). This approach makes it possible to observe how various forms of risk introduced into the evolutionary environment affect choice behavior, both in the ecology and in experimental settings.
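As a bare-bones illustration of this logic (deliberately stripped of the embodied, neural-network aspects of actual evolutionary robotics studies such as Da Rold et al., 2011), the sketch below evolves a single heritable trait, the probability of choosing a risky food source, under two different ecological pressures defined by a survival energy requirement; all numerical settings are illustrative assumptions.

```python
import numpy as np

# Bare-bones sketch of the evolutionary logic (not an embodied robotics simulation):
# a population evolves one heritable trait, the probability of choosing a risky food
# source, under an ecological pressure defined by a survival energy requirement.
# Safe foraging yields 1 per bout; risky foraging yields 4 with probability 0.25.
# All numerical settings are illustrative assumptions.

rng = np.random.default_rng(0)

def evolve(energy_requirement, generations=200, pop_size=100, n_bouts=20):
    traits = rng.random(pop_size)                    # per-individual P(choose risky)
    for _ in range(generations):
        chose_risky = rng.random((pop_size, n_bouts)) < traits[:, None]
        payoff = np.where(chose_risky,
                          4.0 * (rng.random((pop_size, n_bouts)) < 0.25),
                          1.0)
        energy = payoff.sum(axis=1)
        fitness = (energy >= energy_requirement).astype(float) + 1e-6   # survive or not
        parents = rng.choice(pop_size, size=pop_size, p=fitness / fitness.sum())
        traits = np.clip(traits[parents] + rng.normal(0.0, 0.05, pop_size), 0.0, 1.0)
    return traits.mean()

# Harsh ecology: safe foraging alone (20 energy) cannot meet the requirement, so risk
# proneness tends to evolve; in a mild ecology, risk aversion tends to evolve instead.
print(evolve(energy_requirement=30))
print(evolve(energy_requirement=15))
```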
Moreover, the robots are controlled by simple neural networks, whose evolution and effects on behavior can be studied with extreme precision and flexibility: not only by recording their activity during behavior, but also by “lesioning” a well-adapted neural network and observing the impact on risk-related choices, hence drawing new insights into pathological gambling. These are all key advantages of computational evolutionary models, as opposed to purely mathematical and game-theoretical approaches, for putting forward hypotheses regarding the evolution of certain aspects of risk attitudes in uncertain environments (e.g., McNamara et al., 2013). While mathematical and theoretical models certainly provide valuable contributions to bridging the gap between laboratory studies and ecological observations, they lack the opportunities for direct manipulation and experimental observation granted by robotic platforms, be they purely simulated or physically implemented.
To the best of our knowledge, no evolutionary (computational) model of pathological gambling has yet been proposed. However, there are several interesting simulations of how risk attitudes in general might have evolved: some of these works already have important implications for our understanding of gambling behavior, and point towards promising research directions. For instance, Niv et al. (2002) used evolutionary computation techniques to evolve near-optimal neuronal learning rules in a simple neural network model of reinforcement learning in bumblebees foraging for nectar. This resulted in a replication of two well-documented choice strategies in these animals: risk aversion and probability matching. Moreover, risk aversion evolved even in a completely risk-less environment. These results suggest that risk aversion may be a direct consequence of near-optimal reinforcement learning, with no need to assume further evolutionary constraints, such as the existence of a nonlinear subjective utility function for rewards. Their results were also demonstrated in real-world situations, using experiments with a Khepera wheeled robot, and they dovetail nicely with the evidence on the role of the reward prediction error in determining various choice behaviors (see Section Normative (Algorithmic) Models).
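The emergence of risk aversion from reward learning itself can be illustrated with an even simpler setup than Niv et al.’s (which evolved the learning rules themselves): a mostly greedy, sample-based learner choosing between a safe and a risky option with identical mean payoff tends to end up preferring the safe one, because unlucky draws from the risky option are followed by fewer re-samplings of it. All parameters below are illustrative assumptions.

```python
import numpy as np

# Minimal illustration (not Niv et al.'s model): mostly greedy sample-based learners
# choosing between a safe option (always 0.5) and a risky option (0 or 1, same mean)
# tend to end up preferring the safe one, because bad draws from the risky option
# reduce how often it is re-sampled. Parameters are illustrative assumptions.

rng = np.random.default_rng(0)
alpha, epsilon = 0.3, 0.05
n_agents, n_trials = 500, 200
risk_preferring = 0

for _ in range(n_agents):
    q = rng.random(2)                                 # value estimates: 0 = safe, 1 = risky
    for _ in range(n_trials):
        if rng.random() < epsilon:
            a = rng.integers(2)                       # occasional exploration
        else:
            a = int(np.argmax(q))                     # otherwise greedy
        r = 0.5 if a == 0 else float(rng.random() < 0.5)
        q[a] += alpha * (r - q[a])
    risk_preferring += int(np.argmax(q) == 1)

print(risk_preferring / n_agents)                     # typically well below 0.5: risk aversion
```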
Other models do not explicitly focus on any particular species, but rather try to address general issues pertaining to the evolution of risk attitudes. Arbilly et al. (2011) used agent-based evolutionary simulations to investigate an important connection between environmental features, risk aversion, and the evolution of social learning. They started from the observation that, in environments where significant risks are associated with higher-value rewards (e.g., an ecology in which the most valuable food is rare and difficult to obtain), acquiring such rewards is likely to require a certain number of failed attempts before success is achieved. In these circumstances, risk aversion would lead to neglecting such rewards, even if doing so may be sub-optimal in the long run (Real, 1991). However, Arbilly and colleagues noted that this situation also creates an important (and often overlooked) evolutionary advantage of social learning over individual learning, since social learners can bypass the problem of risk aversion by learning where to forage from individuals that have already found food. The results of their evolutionary simulations, which combined a producer–scrounger game with explicit individual and social learning rules for associating different food patch types with experienced reward, confirmed the key role of social learning in such situations, as an antidote to the adverse effects of risk aversion in this type of environment. Incidentally, this also provides an explanation of why many species, humans included, continue to rely heavily on social learning even when it produces disastrous effects, e.g., in escape panic scenarios (Helbing et al., 2000). It also illustrates how this reliance on social learning can be exploited to produce “contagious gambling”: this is precisely what happens when con artists and casinos employ confederates who (falsely) win huge sums in order to lure unsuspecting potential gamblers into the game.
While the number of computational evolutionary models of risk attitudes is still too limited to permit any universal conclusions on the evolution of this complex suite of behaviors, some important methodological implications stand out and are worth noting. This methodology has both advantages and limitations, but what matters is that they tend to be complementary to those exhibited by naturalistic methods. Thus, integrating evolutionary simulations with naturalistic studies has the potential for huge scientific payoffs. With respect to experimental evolutionary robotics (Da Rold et al., 2011; Saglimbeni and Parisi, 2011), the advantages of this method include the following. First, full observability: robots’ behavior can be observed in extreme detail both “in the wild” (i.e., in the ecological setting where robots evolve) and “in the lab” (i.e., under specific test conditions). Second, full control: all variables, regarding both the ecology and the test conditions, can be easily and precisely manipulated, including the possibility of “counterfactual experiments” (that is, studying how ecological pressures for which no natural correlate is known might affect behavior). Third, neurocomputational transparency: the internal dynamics of the robots’ control system (e.g., a neural network) can be precisely measured, which is not entirely the case for natural, living organisms. Fourth, individual differences emerge, since robots differ in how they cope with their ecology and in their level of proficiency (also opening the way to the study of artificial pathologies). Fifth, non-deterministic responses are present, since evolved robots typically do not respond to external stimuli in a fully deterministic way, facilitating comparison with natural, living organisms (which also do not always react in the same way to identical inputs from the environment). Finally, there is a potential for embodied implementation, since simulated robots are based on simulators of real physical platforms, thus allowing easy implementation in real-world scenarios.
In contrast, the method is mostly vulnerable to the following problems and limitations. First, abstraction: both the ecology and the artificial laboratory are much simpler than most natural counterparts (and the same is true for the structure of the robot’s body and its control system). Second, arbitrariness: a huge variety of parameters needs to be set by the experimenter, concerning the ecology, the robot’s structure, and the test conditions (and these are likely to have some impact on the resulting behavior). Finally, there is the need to start small: given the number of variables directly controlled by the experimenter and the amount of data obtained, a gradual, incremental approach is unavoidable if the results are to be understood. As mentioned, however, most of these drawbacks can be overcome by allying computational evolutionary models with naturalistic studies (see Sections Risky Choices in Nonhuman Primates: Implications for Human Pathological Gambling and Risk Attitudes, Environmental Uncertainty and Addictive Behavior: Perspectives from Computational Neuroscience and Evolutionary Robotics).
Conclusions
In this review, we first discussed how the development of refined operant protocols to reproduce and evaluate the gambling proneness phenotype in animal models is fundamental to increasing our understanding of the neurobiological determinants underlying the etiology of pathological gambling and/or to developing new treatment strategies. Then, we surveyed the role of comparative studies on choice behavior in other species, in particular in nonhuman primates, in informing us about the evolutionary origins and cognitive underpinnings of human attitudes towards risk and uncertainty. Finally, we summarized various ways in which computational models can be of assistance in the study of gambling behaviors: while results in this area are still preliminary, we were able to point out several substantial indications originating from the combination of naturalistic observations and artificial modeling.
Reviewing such diverse studies together is meant to have an impact on the methodology of future gambling research: while looking at each of these three rich areas of research in isolation is certainly useful, the potential benefits are compounded by integrating all these methods together. What one learns from an animal model (about the neurobiological underpinnings of pathological gambling) should immediately be verified via computational techniques, and the further predictions generated by that computational model should be tested empirically in natural, living organisms. Similarly, any evolutionary hypothesis on what adaptive pressures shaped risk attitudes, and generated (possibly as a by-product) gambling behavior, should be verified via computational evolutionary models, which in turn should be informed by naturalistic data coming from ethological studies. Only by bringing both human and nonhuman gamblers to the table shall we understand what makes us so vulnerable to such a self-destructive behavioral pattern.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
Fabio Paglieri and Elsa Addessi received funding for this research from an ISTC-CNR intramural grant and from an American Society of Primatologists General Small Grant. Elsa Addessi also gratefully acknowledges the support of the PNR/CNR Aging Program 2012-2014. Walter Adriani received support for this research as Principal Investigator of the ERA-net “NeuroGenMRI”, and Giovanni Laviola and Walter Adriani received funding from the project “GAMBLING—Fattori psicobiologici alla base di comportamenti di ricerca del rischio, disturbi nel controllo degli impulsi e gioco d’azzardo patologico” of the Department of Antidrug Policies, Presidency of the Council of Ministers (Italy).
Footnotes
- ^ Another common way of distinguishing between risk and uncertainty is in terms of how measurable the odds are: Knight (1921) proposed to consider as “risky” those choices where the odds are measurable and known to the subject, whereas the term “uncertainty” should be reserved for probabilistic outcomes with unknown odds. While this distinction has become canonical in behavioral economics (e.g., Camerer and Weber, 1992; Tversky and Kahneman, 1992), its application to animal studies is highly problematic, due to obvious difficulties in establishing how much the odds are known (that is, precisely understood and quantitatively assessed) by experimental subjects.
- ^ Recent molecular analysis has revealed that capuchin monkeys, formerly identified as the single genus Cebus, are two genera, with the robust (tufted) forms (including libidinosus, xanthosternos, apella and several other species) now recognized as the genus Sapajus, and the gracile forms retained as the genus Cebus (Lynch Alfaro et al., 2012). The nomenclature for Sapajus is registered with ZooBank (urn:lsid:zoobank.org:act:3AAFD645-6B09-4C88-B243-652316B55918). Animals identified as Cebus apella in laboratory colonies outside of South America may be any combination of the several species (e.g., C. apella, C. libidinosus, C. nigritus) recognized as separate species since 2001 (Groves, 2001; Fragaszy et al., 2004), but previously considered C. apella.
References
Adriani, W., and Laviola, G. (2003). Elevated levels of impulsivity and reduced place conditioning with d-amphetamine: two behavioral features of adolescence in mice. Behav. Neurosci. 117, 695–703. doi: 10.1037/0735-7044.117.4.695
Adriani, W., and Laviola, G. (2006). Delay aversion but preference for large and rare rewards in two choice tasks: implications for the measurement of self-control parameters. BMC Neurosci. 7:52. doi: 10.1186/1471-2202-7-52
Adriani, W., and Laviola, G. (2009). “Animal models and mechanisms of impulsivity,” in Adolescence In Cerebro Y Mente, eds T. Palomo, R. Beninger, T. Archer and R. Kostrezwa (Madrid: Editorial CYM), 385–434.
Adriani, W., Boyer, F., Gioiosa, L., Macri, S., Dreyer, J. L., and Laviola, G. (2009). Increased impulsive behavior and risk proneness following lentivirus-mediated dopamine transporter over-expression in rats’ nucleus accumbens. Neuroscience 159, 47–58. doi: 10.1016/j.neuroscience.2008.11.042
Adriani, W., Boyer, F., Leo, D., Canese, R., Podo, F., Perrone-Capano, C., et al. (2010). Social withdrawal and gambling-like profile after lentiviral manipulation of DAT expression in the rat accumbens. Int. J. Neuropsychopharmacol. 13, 1329–1342. doi: 10.1017/s1461145709991210
Adriani, W., Leo, D., Greco, D., Rea, M., di Porzio, U., Laviola, G., et al. (2006). Methylphenidate administration to adolescent rats determines plastic changes on reward-related behavior and striatal gene expression. Neuropsychopharmacology 31, 1946–1956. doi: 10.1038/sj.npp.1300962
Adriani, W., Rea, M., Baviera, M., Invernizzi, W., Carli, M., Ghirardi, O., et al. (2004). Acetyl-L-carnitine reduces impulsive behaviour in adolescent rats. Psychopharmacology (Berl) 176, 296–304. doi: 10.1007/s00213-004-1892-9
Adriani, W., Romani, C., Manciocco, A., Vitale, A., and Laviola, G. (2013). Individual differences in choice (in)flexibility but not impulsivity in the common marmoset: an automated, operant-behavior choice task. Behav. Brain Res. 256, 554–563. doi: 10.1016/j.bbr.2013.09.001
Adriani, W., Zoratto, F., and Laviola, G. (2012a). Brain processes in discounting: consequences of adolescent methylphenidate exposure. Curr. Top. Behav. Neurosci. 9, 113–143. doi: 10.1007/7854_2011_156
Adriani, W., Zoratto, F., and Laviola, G. (2012b). “Home-cage testing of choice behaviour: proneness to risk in a gambling task,” in Psychology of Gambling: New Research, ed A. E. Cavanna (Hauppauge, NY: Nova Science Publishers), 3–92.
Ahmed, S. H., Graupner, M., and Gutkin, B. (2009). Computational approaches to the neurobiology of drug addiction. Pharmacopsychiatry 42(Suppl. 1), 144–152. doi: 10.1055/s-0029-1216345
American Psychiatric Association. (2000). Diagnostic and Statistical Manual of Mental Disorders. (4th Edn., text rev.) Washington, DC, USA: American Psychiatric Publishing.
American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders. 5th Edn. Arlington, VA: American Psychiatric Publishing.
Anselme, P. (2012). Loss in risk-taking: absence of optimal gain or reduction in one’s own resources? Behav. Brain Res. 229, 443–446. doi: 10.1016/j.bbr.2012.01.032
Arbilly, M., Motro, U., Feldman, M. W., and Lotem, A. (2011). Evolution of social learning when high expected payoffs are associated with high risk of failure. J. R. Soc. Interface 8, 1604–1615. doi: 10.1098/rsif.2011.0138
Baldassarre, G., and Mirolli, M. (2013). Intrinsically Motivated Learning in Natural and Artificial Systems. Berlin: Springer-Verlag.
Baldassarre, G., Mannella, F., Fiore, V. G., Redgrave, P., Gurney, K., and Mirolli, M. (2013). Intrinsically motivated action-outcome learning and goal-based action recall: a system-level bio-constrained computational model. Neural Netw. 41, 168–187. doi: 10.1016/j.neunet.2012.09.015
Barto, A. (1995). “Adaptive critics and the basal ganglia,” in Models of Information Processing in the Basal Ganglia, eds J. Houk, J. C. Davis and D. Beiser (Cambridge: MIT Press), 215–232.
Bastiani, L., Gori, M., Colasante, E., Siciliano, V., Capitanucci, D., Jarre, P., et al. (2013). Complex factors and behaviors in the gambling population of Italy. J. Gambl. Stud. 29, 1–13. doi: 10.1007/s10899-011-9283-8
Bault, N., Coricelli, G., and Rustichini, A. (2008). Interdependent utilities: how social ranking affects choice behavior. PLoS One 3:e3477. doi: 10.1371/journal.pone.0003477
Bechara, A., Damasio, A. R., Damasio, H., and Anderson, S. W. (1994). Insensitivity to future consequences following damage to human prefrontal cortex. Cognition 50, 7–15. doi: 10.1016/0010-0277(94)90018-3
Behar, I. (1961). Learned avoidance of nonreward. Psychol. Rep. 9, 43–52. doi: 10.2466/pr0.1961.9.1.43
Belsky, J., Jonassaint, C., Pluess, M., Stanton, M., Brummett, B., and Williams, R. (2009). Vulnerability genes or plasticity genes? Mol. Psychiatry 14, 746–754. doi: 10.1038/mp.2009.44
Berridge, K. (2007). The debate over dopamine’s role in reward: the case for incentive salience. Psychopharmacology (Berl) 191, 391–431. doi: 10.1007/s00213-006-0578-x
Braver, T. S., Barch, D. M., and Cohen, J. D. (1999). Cognition and control in schizophrenia: a computational model of dopamine and prefrontal function. Biol. Psychiatry 46, 312–328. doi: 10.1016/s0006-3223(99)00116-x
Caillon, J., Grall-Bronnec, M., Bouju, G., Lagadec, M., and Venisse, J. L. (2012). Pathological gambling in adolescence. Arch. Pediatr. 19, 173–179. doi: 10.1016/j.arcped.2011.11.020
Camerer, C., and Weber, M. (1992). Recent developments in modelling preferences: uncertainty and ambiguity. J. Risk Uncertain. 5, 325–370. doi: 10.1007/bf00122575
Caraco, T. (1981). Energy budgets, risk and foraging preferences in dark-eyed juncos (Junco hyemalis). Behav. Ecol. Sociobiol. 8, 213–217.
Cardinal, R. N., and Howes, N. J. (2005). Effects of lesions of the nucleus accumbens core on choice between small certain rewards and large uncertain rewards in rats. BMC Neurosci. 6:37. doi: 10.1186/1471-2202-6-37
Cardinal, R. N., Winstanley, C. A., Robbins, T. W., and Everitt, B. J. (2004). Limbic corticostriatal systems and delayed reinforcement. Ann. N Y Acad. Sci. 1021, 33–50. doi: 10.1196/annals.1308.004
Carragher, N., and McWilliams, L. A. (2011). A latent class analysis of DSM-IV criteria for pathological gambling: results from the National Epidemiologic Survey on alcohol and related conditions. Psychiatry Res. 187, 185–192.
Chakraborty, S., Chakraborty, D., Mukherjee, O., Jain, S., Ramakrishnan, U., and Sinha, A. (2010). Genetic polymorphism in the serotonin transporter promoter region and ecological success in macaques. Behav. Genet. 40, 672–679. doi: 10.1007/s10519-010-9360-2
Champoux, M., Higley, J. D., and Suomi, S. J. (1997). Behavioral and physiological characteristics of Indian and Chinese-Indian hybrid rhesus macaque infants. Dev. Psychobiol. 31, 49–63. doi: 10.1002/(SICI)1098-2302(199707)31:1<49::AID-DEV5>3.0.CO;2-U
Chase, H. W., and Clark, L. (2010). Gambling severity predicts midbrain response to near-miss outcomes. J. Neurosci. 30, 6180–6187. doi: 10.1523/JNEUROSCI.5758-09.2010
Chen, M. K., Lakshminaryanan, V., and Santos, L. R. (2006). The evolution of our preferences: evidence from capuchin monkey trading behavior. J. Polit. Econ. 114, 517–537. doi: 10.1086/503550
Chesler, E. J., Wilson, S. G., Lariviere, W. R., Rodriguez-Zas, S. L., and Mogil, J. S. (2002). Influences of laboratory environment on behavior. Nat. Neurosci. 5, 1101–1102. doi: 10.1038/nn1102-1101
Christakou, A., Robbins, T. W., and Everitt, B. J. (2004). Prefrontal cortical-ventral striatal interactions involved in affective modulation of attentional performance: implications for corticostriatal circuit function. J. Neurosci. 24, 773–780. doi: 10.1523/jneurosci.0949-03.2004
Clark, L. (2010). Decision-making during gambling: an integration of cognitive and psychobiological approaches. Philos. Trans. R. Soc. Lond. B Biol. Sci. 365, 319–330. doi: 10.1098/rstb.2009.0147
Clutton-Brock, T. H., and Harvey, P. H. (1979). Comparison and adaptation. Proc. R. Soc. Lond. B Biol. Sci. 205, 547–565.
Cocker, P. J., Le Foll, B., Rogers, R. D., and Winstanley, C. A. (2013). A selective role for Dopamine D4 receptors in modulating reward expectancy in a rodent slot machine task. Biol. Psychiatry doi: 10.1016/j.biopsych.2013.08.026. [Epub ahead of print].
Cohen, J. D., and Servan-Schreiber, D. (1992). Context, cortex and dopamine: a connectionist approach to behavior and biology in schizophrenia. Psychol. Rev. 99, 45–77. doi: 10.1037/0033-295x.99.1.45
Cohen, J. D., Braver, T. S., and O’Reilly, R. C. (1996). A computational approach to prefrontal cortex, cognitive control and schizophrenia: recent developments and current challenges. Philos. Trans. R. Soc. Lond. B Biol. Sci. 351, 1515–1527.
Comings, D. E., Gade-Andavolu, R., Gonzalez, N., Wu, S., Muhleman, D., Chen, C., et al. (2001). The additive effect of neurotransmitter genes in pathological gambling. Clin. Genet. 60, 107–116. doi: 10.1034/j.1399-0004.2001.600204.x
Crabbe, J. C., Wahlsten, D., and Dudek, B. C. (1999). Genetics of mouse behavior: interactions with laboratory environment. Science 284, 1670–1672. doi: 10.1126/science.284.5420.1670
Cunningham-Williams, R. M., and Cottler, L. B. (2001). The epidemiology of pathological gambling. Semin. Clin. Neuropsychiatry 6, 155–166. doi: 10.1053/scnp.2001.22919
Da Rold, F., Petrosino, G., and Parisi, D. (2011). Male and female robots. Adapt. Behav. 19, 317–334. doi: 10.1177/1059712311417737
Dalley, J. W., Theobald, D. E., Eagle, D. M., Passetti, F., and Robbins, T. W. (2002). Deficits in impulse control associated with tonically-elevated function in rat serotonergic prefrontal cortex. Neuropsychopharmacology 26, 716–728. doi: 10.1016/s0893-133x(01)00412-2
Daw, N. D., Kakade, S., and Dayan, P. (2002). Opponent interactions between serotonin and dopamine. Neural Netw. 15, 603–616. doi: 10.1016/s0893-6080(02)00052-7
De Martino, B., Camerer, C., and Adolphs, R. (2010). Amygdala damage eliminates monetary loss aversion. Proc. Natl. Acad. Sci. U S A 107, 3788–3792. doi: 10.1073/pnas.0910230107
de Visser, L., Homberg, J. R., Mitsogiannis, M., Zeeb, F. D., Rivalan, M., Fitoussi, A., et al. (2011). Rodent versions of the Iowa gambling task: opportunities and challenges for the understanding of decision-making. Front. Neurosci. 5:109. doi: 10.3389/fnins.2011.00109
de Visser, L., van den Bos, R., Kuurman, W. W., Kas, M. J., and Spruijt, B. M. (2006). Novel approach to the behavioural characterization of inbred mice: automated home cage observations. Genes Brain Behav. 5, 458–466. doi: 10.1111/j.1601-183x.2005.00181.x
Dickson, L. M., Derevensky, J. L., and Gupta, R. (2002). The prevention of gambling problems in youth: a conceptual framework. J. Gambl. Stud. 18, 97–159. doi: 10.1023/A:1015557115049
Dodd, M. L., Klos, K. J., Bower, J. H., Geda, Y. E., Josephs, K. A., and Ahlskog, J. E. (2005). Pathological gambling caused by drugs used to treat Parkinson disease. Arch. Neurol. 62, 1377–1381. doi: 10.1001/archneur.62.9.noc50009
Donati, M. A., Chiesi, F., and Primi, C. (2013). A model to explain at-risk/problem gambling among male and female adolescents: gender similarities and differences. J. Adolesc. 36, 129–137. doi: 10.1016/j.adolescence.2012.10.001
Ellenbogen, S., Derevensky, J., and Gupta, R. (2007). Gender differences among adolescents with gambling-related problems. J. Gambl. Stud. 23, 133–143. doi: 10.1007/s10899-006-9048-y
Ermer, E., Cosmides, L., and Tooby, J. (2008). Relative status regulated risky decision making about resources in men: evidence for the co-evolution of motivation and cognition. Evol. Hum. Behav. 29, 106–118. doi: 10.1016/j.evolhumbehav.2007.11.002
Evenden, J. L., and Ryan, C. N. (1996). The pharmacology of impulsive behaviour in rats: the effects of drugs on response choice with varying delays of reinforcement. Psychopharmacology (Berl) 128, 161–170. doi: 10.1007/s002130050121
Evenden, J. L., and Ryan, C. N. (1999). The pharmacology of impulsive behaviour in rats VI: the effects of ethanol and selective serotonergic drugs on response choice with varying delays of reinforcement. Psychopharmacology (Berl) 146, 413–421. doi: 10.1007/pl00005486
Felsher, J. R., Derevensky, J. L., and Gupta, R. (2004). Lottery playing amongst youth: implications for prevention and social policy. J. Gambl. Stud. 20, 127–153. doi: 10.1023/b:jogs.0000022306.72513.7c
Fiore, V. G., Sperati, V., Mannella, F., Mirolli, M., Gurney, K., Friston, K., et al. (2014). Keep focussing: striatal dopamine multiple functions resolved in a single mechanism tested in a simulated humanoid robot. Front. Psychol. 5:124. doi: 10.3389/fpsyg.2014.00124
Fiorillo, C. D., Tobler, P. N., and Schultz, W. (2003). Discrete coding of reward probability and uncertainty by dopamine neurons. Science 299, 1898–1902. doi: 10.1126/science.1077349
Fragaszy, D. M., Visalberghi, E., and Fedigan, L. M. (2004). The Complete Capuchin: The Biology of Genus Cebus. Cambridge: Cambridge University Press.
Frank, M. J., Loughry, B., and O’Reilly, R. C. (2001). Interactions between the frontal cortex and basal ganglia in working memory: a computational model. Cogn. Affect. Behav. Neurosci. 1, 137–160. doi: 10.3758/cabn.1.2.137
Frank, M. J., Samanta, J., Moustafa, A. A., and Sherman, S. J. (2007a). Hold your horses: impulsivity, deep brain stimulation and medication in parkinsonism. Science 318, 1309–1312. doi: 10.1126/science.1146157
Frank, M. J., Santamaria, A., O’Reilly, R. C., and Willcutt, E. (2007b). Testing computational models of dopamine and noradrenaline dysfunction in attention deficit/hyperactivity disorder. Neuropsychopharmacology 32, 1583–1599. doi: 10.1038/sj.npp.1301278
Frank, M. J., Scheres, A., and Sherman, S. J. (2007c). Understanding decision-making deficits in neurological conditions: insights from models of natural action selection. Philos. Trans. R. Soc. Lond. B Biol. Sci. 362, 1641–1654. doi: 10.1098/rstb.2007.2058
Frank, M. J., Seeberger, L. C., and O’Reilly, R. C. (2004). By carrot or by stick: cognitive reinforcement learning in Parkinsonism. Science 306, 1940–1943. doi: 10.1126/science.1102941
Gigerenzer, G., and Todd, P. M. (1999). “Fast and frugal heuristics: the adaptive toolbox,” in Simple Heuristics That Make Us Smart, eds G. Gigerenzer and P. M. Todd (Oxford: Oxford University Press), 3–34.
Glimcher, P. (2011). Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proc. Natl. Acad. Sci. U S A 108(Suppl. 3), 15647–15654. doi: 10.1073/pnas.1014269108
Goldstein, S. J., and Richard, A. F. (1989). Ecology of rhesus macaques (Macaca mulatta) in Northwest Pakistan. Int. J. Primatol. 10, 531–567. doi: 10.1007/BF02739364
Graybiel, A. (2008). Habits, rituals and the evaluative brain. Annu. Rev. Neurosci. 31, 359–387. doi: 10.1146/annurev.neuro.29.051605.112851
Gutkin, B. S., Dehaene, S., and Changeux, J. P. (2006). A neurocomputational hypothesis for nicotine addiction. Proc. Natl. Acad. Sci. U S A 103, 1106–1111. doi: 10.1073/pnas.0510220103
Harrison, A. A., Everitt, B. J., and Robbins, T. W. (1997). Central 5-HT depletion enhances impulsive responding without affecting the accuracy of attentional performance: interactions with dopaminergic mechanisms. Psychopharmacology (Berl) 133, 329–342. doi: 10.1007/s002130050410
Haslam, M., Hernandez-Aguilar, A., Ling, V., Carvalho, S., de la Torre, I., DeStefano, A., et al. (2009). Primate archaeology. Nature 460, 339–344. doi: 10.1038/nature08188
Haun, D. B. M., Nawroth, C., and Call, J. (2011). Great apes’ risk taking strategies in a decision making task. PLoS One 6:e28801. doi: 10.1371/journal.pone.0028801
Hayden, B. Y., and Platt, M. L. (2007). Temporal discounting predicts risk sensitivity in rhesus macaques. Curr. Biol. 17, 49–53. doi: 10.1016/j.cub.2006.10.055
Hayden, B. Y., and Platt, M. L. (2009). Gambling for Gatorade: risk sensitive decision making for fluid reward in humans. Anim. Cogn. 12, 201–207. doi: 10.1007/s10071-008-0186-8
Hayden, B. Y., Heilbronner, S. R., and Platt, M. L. (2010). Ambiguity aversion in rhesus macaques. Front. Neurosci. 4:166. doi: 10.3389/fnins.2010.00166
Hayden, B. Y., Heilbronner, S. R., Nair, A. C., and Platt, M. L. (2008a). Cognitive influences on the risk-seeking by rhesus macaques. Judgm. Decis. Mak. 3, 389–395.
Hayden, B. Y., Nair, A. C., McCoy, A. N., and Platt, M. L. (2008b). Posterior cingulate cortex mediates outcome-contingent allocation of behavior. Neuron 60, 19–25. doi: 10.1016/j.neuron.2008.09.012
Heilbronner, S. R., and Hayden, B. Y. (2013). Contextual factors explain risk-seeking preferences in rhesus monkeys. Front. Neurosci. 7:7. doi: 10.3389/fnins.2013.00007
Heilbronner, S. R., Hayden, B. Y., and Platt, M. L. (2011). Decision salience signals in posterior cingulate cortex. Front. Neurosci. 5:55. doi: 10.3389/fnins.2011.00055
Heilbronner, S. R., Rosati, A. G., Stevens, J. R., Hare, B., and Hauser, M. D. (2008). A fruit in the hand or two in the bush? Divergent risk preferences in chimpanzees and bonobos. Biol. Lett. 4, 246–249. doi: 10.1098/rsbl.2008.0081
Heinz, A., Higley, J. D., Gorey, J. G., Saunders, R. C., Jones, D. W., Hommer, D., et al. (1998). In vivo association between alcohol intoxication, aggression, and serotonin transporter availability in nonhuman primates. Am. J. Psychiatry 155, 1023–1028.
Helbing, D., Farkas, I., and Vicsek, T. (2000). Simulating dynamical features of escape panic. Nature 407, 487–490. doi: 10.1038/35035023
Henrich, J., and McElreath, R. (2002). Are peasants risk-averse decision makers? Curr. Anthropol. 43, 172–181. doi: 10.1086/338291
Hill, S. E, and Buss, D. M. (2010). Risk and relative social rank: positional concerns and risky shifts in probabilistic decision-making. Evol. Hum. Behav. 31, 219–226. doi: 10.1016/j.evolhumbehav.2010.01.002
Hodgins, D. C., Stea, J. N., and Grant, J. E. (2011). Gambling disorders. Lancet 378, 1874–1884. doi: 10.1016/S0140-6736(10)62185-X
Hollander, E., Sood, E., Pallanti, S., Baldini-Rossi, N., and Baker, B. (2005). Pharmacological treatments of pathological gambling. J. Gambl. Stud. 21, 101–110. doi: 10.1007/s10899-004-1932-8
Huang, J. H., and Boyer, R. (2007). Epidemiology of youth gambling problems in Canada: a national prevalence study. Can. J. Psychiatry 52, 657–665.
Huys, Q. J. M. (2013). “Computational psychiatry,” in Encyclopaedia of Computational Neuroscience, eds D. Jaeger and R. Jung (Berlin: Springer).
Huys, Q. J. M., Pizzagalli, D. A., Bogdan, R., and Dayan, P. (2013). Mapping anhedonia onto reinforcement learning: a behavioural meta-analysis. Biol. Mood Anxiety Disord. 3:12. doi: 10.1186/2045-5380-3-12
Ibanez, A., Blanco, C., Perez de Castro, I., Fernandez-Piqueras, J., and Saiz-Ruiz, J. (2003). Genetics of pathological gambling. J. Gambl. Stud. 19, 11–22. doi: 10.1023/A:1021271029163
Jazaeri, S. A., and Habil, M. H. (2012). Reviewing two types of addiction - pathological gambling and substance use. Indian J. Psychol. Med. 34, 5–11.
Joukhador, J., Maccallum, F., and Blaszczynski, A. (2003). Differences in cognitive distortions between problem and social gamblers. Psychol. Rep. 92, 1203–1214. doi: 10.2466/pr0.92.3.1203-1214
Kacelnik, A., and Bateson, M. (1996). Risky theories: the effects of variance on foraging decisions. Am. Zool. 36, 402–434. doi: 10.1093/icb/36.4.402
Kacelnik, A., and Bateson, M. (1997). Risk sensitivity: cross-roads for theories of decision making. Trends Cogn. Sci. 1, 304–309. doi: 10.1016/s1364-6613(97)01093-0
Kahneman, D., and Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47, 263–292. doi: 10.2307/1914185
Kakade, S., and Dayan, P. (2002). Dopamine: generalization and bonuses. Neural Netw. 15, 549–559. doi: 10.1016/s0893-6080(02)00048-5
Kassinove, J. I., and Schare, M. L. (2001). Effects of the “near miss” and the “big win” on persistence at slot machine gambling. Psychol. Addict. Behav. 15, 155–158. doi: 10.1037/0893-164x.15.2.155
Kim, S. W., Grant, J. E., Adson, D. E., Shin, Y. C., and Zaninelli, R. (2002). A double-blind placebo-controlled study of the efficacy and safety of paroxetine in the treatment of pathological gambling. J. Clin. Psychiatry 63, 501–507. doi: 10.4088/jcp.v63n0606
Knott, C. D. (1999). “Orangutan behavior and ecology,” in The Nonhuman Primates, eds P. Dolhinow and A. Fuentes (Mountain View, CA: Mayfield Press), 50–57.
Koot, S., Adriani, W., Saso, L., van den Bos, R., and Laviola, G. (2009). Home-cage testing of delay discounting in rats. Behav. Res. Methods 41, 1169–1176. doi: 10.3758/brm.41.4.1169
Koot, S., Zoratto, F., Cassano, T., Colangeli, R., Laviola, G., van den Bos, R., et al. (2012). Compromised decision-making and increased gambling proneness following dietary serotonin depletion in rats. Neuropharmacology 62, 1640–1650. doi: 10.1016/j.neuroph002
Kumar, P., Waiter, G., Ahearn, T., Milders, M., Reid, I., and Steele, J. D. (2008). Abnormal temporal difference reward-learning signals in major depression. Brain 131, 2084–2093. doi: 10.1093/brain/awn136
Kuznar, L. A. (2001). Risk sensitivity and value among Andean pastoralists: measures, models and empirical tests. Curr. Anthropol. 42, 432–440. doi: 10.1086/320483
Lakshminarayanan, V. R., Chen, M. K., and Santos, L. R. (2008). Endowment effect in capuchin monkeys. Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 3837–3844. doi: 10.1098/rstb.2008.0149
Lakshminarayanan, V. R., Chen, M. K., and Santos, L. R. (2011). The evolution of decision-making under risk: framing effects in monkey risk preferences. J. Exp. Soc. Psychol. 47, 689–693. doi: 10.1016/j.jesp.2010.12.011
Laviola, G., Macri, S., Morley-Fletcher, S., and Adriani, W. (2003). Risk-taking behavior in adolescent mice: psychobiological determinants and early epigenetic influence. Neurosci. Biobehav. Rev. 27, 19–31. doi: 10.1016/s0149-7634(03)00006-x
Lee, D. (2005). Neuroeconomics: making risky choices in the brain. Nat. Neurosci. 8, 1129–1130. doi: 10.1038/nn0905-1129
Lejuez, C. W., Read, J. P., Kahler, C. W., Richards, J. B., Ramsey, S. E., Stuart, G. L., et al. (2002). Evaluation of a behavioral measure of risk taking: the balloon analogue risk task (BART). J. Exp. Psychol. Appl. 8, 75–84. doi: 10.1037/1076-898x.8.2.75
Lesieur, H. R., and Blume, S. B. (1987). The South Oaks Gambling Screen (SOGS): a new instrument for the identification of pathological gamblers. Am. J. Psychiatry 144, 1184–1188.
Leussis, M. P., and Andersen, S. L. (2008). Is adolescence a sensitive period for depression? behavioral and neuroanatomical findings from a social stress model. Synapse 62, 22–30. doi: 10.1002/syn.20462
Long, A. B., Kuhn, C. M., and Platt, M. L. (2009). Serotonin shapes risky decision making in monkeys. Soc. Cogn. Affect. Neurosci. 4, 346–356. doi: 10.1093/scan/nsp020
Lowengrub, K., Iancu, I., Aizer, A., Kotler, M., and Dannon, P. N. (2006). Pharmacotherapy of pathological gambling: review of new treatment modalities. Expert Rev. Neurother. 6, 1845–1851. doi: 10.1586/14737175.6.12.1845
Lynch Alfaro, J. W., Silva, J. D. Jr., and Rylands, A. B. (2012). How different are robust and gracile capuchin monkeys? An argument for the use of Sapajus and Cebus. Am. J. Primatol. 74, 273–286. doi: 10.1002/ajp.22007
MacLean, E. L., Mandalaywala, T. M., and Brannon, E. M. (2012). Variance-sensitive choice in lemurs: constancy trumps quantity. Anim. Cogn. 15, 15–25. doi: 10.1007/s10071-011-0425-2
Maia, T., and Frank, M. J. (2011). From reinforcement learning models to psychiatric and neurological disorders. Nat. Neurosci. 14, 154–162. doi: 10.1038/nn.2723
Mannella, F., Mirolli, M., and Baldassarre, G. (2007). “The role of Amygdala in devaluation: a model tested with a simulated robot,” in Proceedings of the Seventh International Conference on Epigenetic Robotics (EpiRob2007), eds L. Berthouze, G. P. Dhristiopher, M. Littman, H. Kozima and C. Balkenius (Lund: Lund University Cognitive Studies), 77–84.
Mannella, F., Mirolli, M., and Baldassarre, G. (2010). “The interplay of pavlovian and instrumental processes in devaluation experiments: a computational embodied neuroscience model tested with a simulated rat,” in Modelling Perception with Artificial Neural Networks, eds C. R. Tosh and G. D. Ruxton (Cambridge: Cambridge University Press), 93–113.
Mannella, F., Zappacosta, S., Mirolli, M., and Baldassarre, G. (2008). “A computational model of the amygdala nuclei’s role in second order conditioning,” in From Animals to Animats 10: Proceedings of the Tenth International Conference on the Simulation of Adaptive Behavior (SAB2008), eds M. Asada, J. C. T. Hallam, J.-A. Meyer and J. Tani (Berlin: Springer Verlag), 321–330.
Marsh, B., and Kacelnik, A. (2002). Framing effects and risky decisions in starlings. Proc. Natl. Acad. Sci. U S A 99, 3352–3355. doi: 10.1073/pnas.042491999
McClelland, J., and Rumelhart, D. (1989). Explorations in Parallel Distributed Processing: A Handbook of Models, Programs and Exercises. Cambridge: MIT Press.
McClure, S. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science 306, 503–507. doi: 10.1126/science.1100907
McCormack, A., Shorter, G. W., and Griffiths, M. D. (2012). An empirical study of gender differences in online gambling. J. Gambl. Stud. doi: 10.1007/s10899-012-9341-x. [Epub ahead of print].
McCoy, A. N., and Platt, M. L. (2005). Risk-sensitive neurons in macaque posterior cingulate cortex. Nat. Neurosci. 8, 1220–1227. doi: 10.1038/nn1523
McNamara, J. M. (1996). Risk-prone behaviour under rules which have evolved in a changing environment. Am. Zool. 36, 484–495. doi: 10.1093/icb/36.4.484
McNamara, J. M., Fawcett, T. W., and Houston, A. I. (2013). An adaptive response to uncertainty generates positive and negative contrast effects. Science 340, 1084–1086. doi: 10.1126/science.1230599
Meyer, G., Schwertfeger, J., Exton, M. S., Janssen, O. E., Knapp, W., Stadler, M. A., et al. (2004). Neuroendocrine response to casino gambling in problem gamblers. Psychoneuroendocrinology 29, 1272–1280. doi: 10.1016/j.psyneuen.2004.03.005
Mirolli, M., Mannella, F., and Baldassarre, G. (2010). The roles of the amygdala in the affective regulation of body, brain and behaviour. Connect. Sci. 22, 215–245. doi: 10.1080/09540091003682553
Mirolli, M., Santucci, V., and Baldassarre, G. (2013). Phasic dopamine as a prediction error signal of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: a simulated robotic study. Neural Netw. 39, 40–51. doi: 10.1016/j.neunet.2012.12.012
Mobini, S., Body, S., Ho, M. Y., Bradshaw, C. M., Szabadi, E., Deakin, J. F., et al. (2002). Effects of lesions of the orbitofrontal cortex on sensitivity to delayed and probabilistic reinforcement. Psychopharmacology (Berl) 160, 290–298. doi: 10.1007/s00213-001-0983-0
Mobini, S., Chiang, T. J., Al-Ruwaitea, A. S., Ho, M. Y., Bradshaw, C. M., and Szabadi, E. (2000). Effect of central 5-hydroxytryptamine depletion on inter-temporal choice: a quantitative analysis. Psychopharmacology (Berl) 149, 313–318. doi: 10.1007/s002130000385
Montague, P. R., Dolan, R. J., Friston, K. J., and Dayan, P. (2012). Computational psychiatry. Trends Cogn. Sci. 16, 72–80. doi: 10.1016/j.tics.2011.11.018
Montague, P. R., Hyman, S. E., and Cohen, J. D. (2004). Computational roles for dopamine in behavioural control. Nature 431, 760–767. doi: 10.1038/nature03015
Moustafa, A. A., Cohen, M. X., Sherman, S. J., and Frank, M. J. (2008). A role for dopamine in temporal decision making and reward maximization in parkinsonism. J. Neurosci. 28, 12294–12304. doi: 10.1523/jneurosci.3116-08.2008
Murray, G. K., Corlett, P. R., Clark, L., Pessiglione, M., Blackwell, A. D., Honey, G., et al. (2008). Substantia nigra/ventral tegmental reward prediction error disruption in psychosis. Mol. Psychiatry 13, 267–276. doi: 10.1038/sj.mp.4002058
Niv, Y., Joel, D., Meilijson, I., and Ruppin, E. (2002). Evolution of reinforcement learning in uncertain environments: a simple explanation for complex foraging behaviors. Adapt. Behav. 10, 5–24. doi: 10.1177/10597123020101001
Nordin, C., and Eklundh, T. (1999). Altered CSF 5-HIAA disposition in pathologic male gamblers. CNS Spectr. 4, 25–33.
O’Neill, M., and Schultz, W. (2010). Coding of reward risk by orbitofrontal neurons is mostly distinct from coding of reward value. Neuron 68, 789–800. doi: 10.1016/j.neuron.2010.09.031
Panlilio, L. V., Thorndike, E. B., and Schindler, C. W. (2007). Blocking of conditioning to a cocaine-paired stimulus: testing the hypothesis that cocaine perpetually produces a signal of larger-than-expected reward. Pharmacol. Biochem. Behav. 86, 774–777. doi: 10.1016/j.pbb.2007.03.005
Petry, N. M., Stinson, F. S., and Grant, B. F. (2005). Comorbidity of DSM-IV pathological gambling and other psychiatric disorders: results from the National Epidemiologic Survey on alcohol and related conditions. J. Clin. Psychiatry 66, 564–574. doi: 10.4088/jcp.v66n0504
Potenza, M. N. (2001). The neurobiology of pathological gambling. Semin. Clin. Neuropsychiatry 6, 217–226. doi: 10.1053/scnp.2001.22929
Potenza, M. N. (2013). Neurobiology of gambling behaviours. Curr. Opin. Neurobiol. 23, 660–667. doi: 10.1016/j.conb.2013.03.004
Puumala, T., and Sirvio, J. (1998). Changes in activities of dopamine and serotonin systems in the frontal cortex underlie poor choice accuracy and impulsivity of rats in an attention task. Neuroscience 83, 489–499. doi: 10.1016/s0306-4522(97)00392-8
Real, L. A. (1991). Animal choice behavior and the evolution of cognitive architecture. Science 253, 980–986. doi: 10.1126/science.1887231
Redish, A. D. (2004). Addiction as a computational process gone awry. Science 306, 1944–1947. doi: 10.1126/science.1102384
Redish, A. D., Jensen, S., Johnson, A., and Kurth-Nelson, Z. (2007). Reconciling reinforcement learning models with behavioral extinction and renewal: implications for addiction, relapse, and problem gambling. Psychol. Rev. 114, 784–805. doi: 10.1037/0033-295X.114.3.784
Reuter, J., Raedler, T., Rose, M., Hand, I., Glascher, J., and Buchel, C. (2005). Pathological gambling is linked to reduced activation of the mesolimbic reward system. Nat. Neurosci. 8, 147–148. doi: 10.1038/nn1378
Richard, A. F., Goldstein, S. J., and Dewar, R. E. (1989). Weed macaques: the evolutionary implications of macaque feeding ecology. Int. J. Primatol. 10, 569–594. doi: 10.1007/bf02739365
Ridderinkhof, K. R., van den Wildenberg, W. P. M., Segalowitz, S. J., and Carter, C. S. (2004). Neurocognitive mechanisms of cognitive control: the role of prefrontal cortex in action selection, response inhibition, performance monitoring and reward-based learning. Brain Cogn. 56, 129–140. doi: 10.1016/j.bandc.2004.09.016
Rolls, E. T., Loh, M., Deco, G., and Winterer, G. (2008). Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nat. Rev. Neurosci. 9, 696–709. doi: 10.1038/nrn2462
Rosati, A. G., and Hare, B. (2012). Decision making across social contexts: competition increases preferences for risk in chimpanzees and bonobos. Anim. Behav. 84, 869–879. doi: 10.1016/j.anbehav.2012.07.010
Rosati, A. G., and Hare, B. (2013). Chimpanzees and bonobos exhibit emotional responses to decision outcomes. PLoS One 8:e63058. doi: 10.1371/journal.pone.0063058
Rosati, A. G., and Stevens, J. R. (2009). “Rational decisions: the adaptive nature of context-dependent choice” in Rational Animals, Irrational Humans, eds S. Watanabe, A. P. Blaisdell, L. Huber and A. Young (Tokyo: Keio University Press), 101–117.
Ryan, R. M., and Deci, E. L. (2000). Intrinsic and extrinsic motivations: classic definitions and new directions. Contemp. Educ. Psychol. 25, 54–67. doi: 10.1006/ceps.1999.1020
Saglimbeni, F., and Parisi, D. (2011). “Input from the external environment and input from within the body,” in Advances in Artificial Life. Darwin Meets von Neumann, Part I, eds G. Kampis, I. Karsai and E. Szathmáry (Berlin: Springer), 148–155.
Scheres, A., Dijkstra, M., Ainslie, E., Balkan, J., Reynolds, B., Sonuga-Barke, E., et al. (2006). Temporal and probabilistic discounting of rewards in children and adolescents: effects of age and ADHD symptoms. Neuropsychologia 44, 2092–2103. doi: 10.1016/j.neuropsychologia.2005.10.012
Schultz, W. (2006). Behavioral theories and the neurophysiology of reward. Annu. Rev. Psychol. 57, 87–115. doi: 10.1146/annurev.psych.56.091103.070229
Schultz, W., Dayan, P., and Montague, P. R. (1997). A neural substrate of prediction and reward. Science 275, 1593–1599. doi: 10.1126/science.275.5306.1593
Shaffer, H. J., and Korn, D. A. (2002). Gambling and related mental disorders: a public health analysis. Annu. Rev. Public Health 23, 171–212. doi: 10.1146/annurev.publhealth.23.100901.140532
Shafir, S., Waite, T. A., and Smith, B. H. (2002). Context-dependent violations of rational choice in honeybees (Apis mellifera) and gray jays (Perisoreus canadensis). Behav. Ecol. Sociobiol. 51, 180–187.
Shead, N. W., and Hodgins, D. C. (2009). Probability discounting of gains and losses: implications for risk attitudes and impulsivity. J. Exp. Anal. Behav. 92, 1–16. doi: 10.1901/jeab.2009.92-1
Silberberg, A., Parker, S., Allouch, C., Fabos, M., Hoberman, H., McDonald, L., et al. (2013). Human risky choice in a repeated-gambles procedure: an up-linkage replication of Lakshminarayanan, Chen and Santos (2011). Anim. Cogn. 16, 907–914. doi: 10.1007/s10071-013-0623-1
Simon, N. W., Gilbert, R. J., Mayse, J. D., Bizon, J. L., and Setlow, B. (2009). Balancing risk and reward: a rat model of risky decision making. Neuropsychopharmacology 34, 2208–2217. doi: 10.1038/npp.2009.48
Smith, A. J., Li, M., Becker, S., and Kapur, S. (2007). Linking animal models of psychosis to computational models of dopamine function. Neuropsychopharmacology 32, 54–66. doi: 10.1038/sj.npp.1301086
So, N. Y., and Stuphorn, V. (2010). Supplementary eye field encodes option and action value for saccades with variable reward. J. Neurophysiol. 104, 2634–2653. doi: 10.1152/jn.00430.2010
Spruijt, B. M., and de Visser, L. (2006). Advanced behavioural screening: automated home-cage ethology. Drug Discov. Today Technol. 3, 231–237. doi: 10.1016/j.ddtec.2006.06.010
Stephens, D. W., and Anderson, D. (2001). The adaptive value of preference for immediacy: when shortsighted rules have farsighted consequences. Behav. Ecol. 12, 330–339. doi: 10.1093/beheco/12.3.330
Stevens, J. R. (2010). “Rational decision making in primates: the bounded and the ecological,” in Primate Neuroethology, eds M. L. Platt and A. A. Ghazanfar (Oxford: Oxford University Press), 96–116.
Strong, D. R., Daughters, S. B., Lejuez, C. W., and Breen, R. B. (2004). Using the Rasch model to develop a revised Gambling Attitudes and Beliefs Scale (GABS) for use with male college student gamblers. Subst. Use Misuse 39, 1013–1024. doi: 10.1081/ja-120030897
Sugrue, L. P., Corrado, G. S., and Newsome, W. T. (2005). Choosing the greater of two goods: neural currencies for valuation and decision making. Nat. Rev. Neurosci. 6, 363–375. doi: 10.1038/nrn1666
Timberlake, W., and Peden, B. F. (1987). On the distinction between open and closed economies. J. Exp. Anal. Behav. 48, 35–60. doi: 10.1901/jeab.1987.48-35
Todd, P. M., and Gigerenzer, G. (2000). Précis of “Simple heuristics that make us smart”. Behav. Brain Sci. 23, 727–741.
Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683
Tversky, A., and Kahneman, D. (1992). Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5, 297–323. doi: 10.1007/bf00122574
Ungless, M. (2004). Dopamine: the salient issue. Trends Neurosci. 27, 702–706. doi: 10.1016/j.tins.2004.10.001
Van den Berg, C. L., Pijlman, F. T., Koning, H. A., Diergaarde, L., Van Ree, J. M., and Spruijt, B. M. (1999). Isolation changes the incentive value of sucrose and social behaviour in juvenile and adult rats. Behav. Brain Res. 106, 133–142. doi: 10.1016/s0166-4328(99)00099-6
van den Bos, R., Davies, W., Dellu-Hagedorn, F., Goudriaan, A. E., Granon, S., Homberg, J., et al. (2013). Cross-species approaches to pathological gambling: a review targeting sex differences, adolescent vulnerability and ecological validity of research tools. Neurosci. Biobehav. Rev. 37(10 Pt. 2), 2454–2471. doi: 10.1016/j.neubiorev.2013.07.005
van den Bos, R., Lasthuis, W., den Heijer, E., van der Harst, J., and Spruijt, B. (2006). Toward a rodent model of the Iowa gambling task. Behav. Res. Methods 38, 470–478. doi: 10.3758/bf03192801
van der Staay, F. J. (2006). Animal models of behavioral dysfunctions: basic concepts and classifications and an evaluation strategy. Brain Res. Rev. 52, 131–159. doi: 10.1016/j.brainresrev.2006.01.006
von Neumann, J., and Morgenstern, O. (1947). Theory of Games and Economic Behavior. Princeton: Princeton University Press.
Voon, V., Thomsen, T., Miyasaki, J. M., de Souza, M., Shafro, A., Fox, S. H., et al. (2007). Factors associated with dopaminergic drug-related pathological gambling in Parkinson’s disease. Arch. Neurol. 64, 212–216. doi: 10.1001/archneur.64.2.212
Wahlsten, D., Metten, P., Phillips, T. J., Boehm, S. L. 2nd., Burkhart-Kasch, S., Dorow, J., et al. (2003). Different data from different labs: lessons from studies of gene-environment interaction. J. Neurobiol. 54, 283–311. doi: 10.1002/neu.10173
Waite, T. A. (2001). Background context and decision making in hoarding gray jays. Behav. Ecol. 12, 318–324. doi: 10.1093/beheco/12.3.318
Waltz, J. A., Frank, M. J., Robinson, B. M., and Gold, J. M. (2007). Selective reinforcement learning deficits in schizophrenia support predictions from computational models of striatal-cortical dysfunction. Biol. Psychiatry 62, 756–764. doi: 10.1016/j.biopsych.2006.09.042
Watson, K. K., Ghodasra, J. H., and Platt, M. L. (2009). Serotonin transporter genotype modulates social reward and punishment in rhesus macaques. PLoS One 4:e4156. doi: 10.1371/journal.pone.0004156
Wiebe, J. M., Cox, B. J., and Mehmel, B. G. (2000). The South Oaks Gambling Screen revised for adolescents (SOGS-RA): further psychometric findings from a community sample. J. Gambl. Stud. 16, 275–288. doi: 10.1023/A:1009489132628
Wilber, M. K., and Potenza, M. N. (2006). Adolescent gambling: research and clinical implications. Psychiatry 3, 40–48.
Wilhelm, C. J., and Mitchell, S. H. (2008). Rats bred for high alcohol drinking are more sensitive to delayed and probabilistic outcomes. Genes Brain Behav. 7, 705–713. doi: 10.1111/j.1601-183x.2008.00406.x
Winstanley, C. A., Cocker, P. J., and Rogers, R. D. (2011). Dopamine modulates reward expectancy during performance of a slot machine task in rats: evidence for a ‘near-miss’ effect. Neuropsychopharmacology 36, 913–925. doi: 10.1038/npp.2010.230
Wise, R. (2004). Dopamine, learning and motivation. Nat. Rev. Neurosci. 5, 483–494. doi: 10.1038/nrn1406
Wogar, M. A., Bradshaw, C. M., and Szabadi, E. (1993). Effect of lesions of the ascending 5-hydroxytryptaminergic pathways on choice between delayed reinforcers. Psychopharmacology (Berl) 111, 239–243. doi: 10.1007/bf02245530
Wrangham, R. W., and Peterson, D. (1996). Demonic Males: Apes and the Origins of Human Violence. Cambridge: Harvard University Press.
Wrangham, R. W., and Pilbeam, D. (2001). “African apes as time machines,” in All Apes Great and Small, eds B. M. F. Galdikas, N. E. Briggs, L. K. Sheeran, G. L. Shapiro and J. Goodall (New York: Kluwer Academic/Plenum), 5–17.
Wright, P. C. (1999). Lemur traits and Madagascar ecology: coping with an island environment. Am. J. Phys. Anthropol. 110(Suppl. 29), 31–72. doi: 10.1002/(sici)1096-8644(1999)110:29+<31::aid-ajpa3>3.0.co;2-0
Yamada, H., Tymula, A., Louie, K., and Glimcher, P. W. (2013). Thirst-dependent risk preferences in monkeys identify a primitive form of wealth. Proc. Natl. Acad. Sci. U S A 110, 15788–15793. doi: 10.1073/pnas.1308718110
Young, M. M., and Wohl, M. J. (2011). The Canadian problem gambling index: an evaluation of the scale and its accompanying profiler software in a clinical setting. J. Gambl. Stud. 27, 467–485. doi: 10.1007/s10899-010-9224-y
Zeeb, F. D., Robbins, T. W., and Winstanley, C. A. (2009). Serotonergic and dopaminergic modulation of gambling behavior as assessed using a novel rat gambling task. Neuropsychopharmacology 34, 2329–2343. doi: 10.1038/npp.2009.62
Zoratto, F., Laviola, G., and Adriani, W. (2012). Choice with delayed or uncertain reinforcers in rats: influence of timeout duration and session length. Synapse 66, 792–806. doi: 10.1002/syn.21570
Keywords: pathological gambling, risk sensitivity, uncertain reward, animal models, nonhuman primates, neurocomputational models, evolutionary models
Citation: Paglieri F, Addessi E, De Petrillo F, Laviola G, Mirolli M, Parisi D, Petrosino G, Ventricelli M, Zoratto F and Adriani W (2014) Nonhuman gamblers: lessons from rodents, primates, and robots. Front. Behav. Neurosci. 8:33. doi: 10.3389/fnbeh.2014.00033
Received: 30 November 2013; Paper pending published: 07 January 2014;
Accepted: 22 January 2014; Published online: 11 February 2014.
Edited by:
Patrick Anselme, University of Liège, Belgium
Reviewed by:
Francesca Cirulli, Istituto Superiore di Sanità, Italy; Alicia Izquierdo, University of California, Los Angeles, USA
Copyright © 2014 Paglieri, Addessi, De Petrillo, Laviola, Mirolli, Parisi, Petrosino, Ventricelli, Zoratto and Adriani. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Fabio Paglieri, Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (ISTC-CNR), Goal-Oriented Agents Lab (GOAL), Via S. Martino della Battaglia 44, 00185 Rome, Italy e-mail: fabio.paglieri@istc.cnr.it
Walter Adriani, Department of Cell Biology and Neurosciences, Istituto Superiore di Sanità, Viale Regina Elena 299, 00185 Rome, Italy e-mail: walter.adriani@iss.it