
Disentangled behavioral representations

Amir Dezfouli^1, Hassan Ashtiani^2, Omar Ghattas^{1,3}, Richard Nock^{1,4,5}, Peter Dayan^{6,#}, Cheng Soon Ong^{1,4}
^1 Data61, CSIRO  ^2 McMaster University  ^3 University of Chicago  ^4 Australian National University  ^5 University of Sydney  ^6 Max Planck Institute
Corresponding authors: amir.dezfouli@data61.csiro.au, # dayan@tue.mpg.de

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Abstract

Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space. However, these models often fit behavior more poorly than recurrent neural networks (RNNs), which are more flexible and make fewer assumptions about the underlying decision-making processes. Unfortunately, the parameter and latent activity spaces of RNNs are generally high-dimensional and uninterpretable, making it hard to use them to study individual differences. Here, we show how to benefit from the flexibility of RNNs while representing individual differences in a low-dimensional and interpretable space. To achieve this, we propose a novel end-to-end learning framework in which an encoder is trained to map the behavior of subjects into a low-dimensional latent space. These low-dimensional representations are used to generate the parameters of individual RNNs corresponding to the decision-making process of each subject. We introduce terms into the loss function that ensure that the latent dimensions are informative and disentangled, i.e., encouraged to have distinct effects on behavior. This allows them to align with separate facets of individual differences. We illustrate the performance of our framework on synthetic data as well as a dataset including the behavior of patients with psychiatric disorders.

1 Introduction

There is substantial commonality among humans (and other animals) in the way that they learn from experience in order to make decisions. However, there is often also considerable variability in the choices of different subjects in the same task [Carroll and Maxwell, 1979]. Such variability is rooted in the structure of the underlying processes; for example, subjects can differ in their tendencies to explore new actions [e.g., Frank et al., 2009] or in the weights they give to past experiences [e.g., den Ouden et al., 2013]. If meaningfully disentangled, these factors would crisply characterise the decision-making processes of the subjects, and would provide a low-dimensional latent space that could be used for many other tasks, including studying the behavioral heterogeneity of subjects endowed with the same psychiatric labels. However, extracting such representations from behavioral data is challenging, as choices emerge from a complex set of interactions between latent variables and past experiences, making disentanglement difficult.

One promising approach proposed for learning low-dimensional representations of behavioral data is through the use of cognitive modelling [e.g., Navarro et al., 2006, Busemeyer and Stout, 2002], for example using a reinforcement learning framework [e.g., Daw, 2011]. In this approach, a parametrised computational model is assumed to underlie the decision-making process, and the parameters of this model, such as the tendency to explore and the learning rate, are found by fitting each subject's choices. Their individual parameters are treated as the latent representations of each subject. This approach has been successful at identifying differences between subjects in various conditions [e.g., Yechiam et al., 2005]; however, it is constrained by its reliance on the availability of a suitable parametric model that can capture behavior and behavioral variability across subjects. In practice, this often leads to manually designing and comparing a limited number of alternative models which may not fit the behavioral data closely.

Figure 1: The model comprises an encoder (shown in red) and a decoder (blue). The encoder is the composition of an RNN and a series of fully-connected layers. The encoder maps each whole input sequence (in the rectangles on the left) into a point in the latent space (depicted by $(z_1, z_2)$-coordinates in the middle) based on the final state of the RNN. The latent representation for each input sequence is in turn fed into the decoder (shown in blue). The decoder generates the weights of an RNN (called the learning network here) and is shown by the dotted lines on the right side. The learning network is the reconstruction of the closed-loop dynamical process of the subject which generated the corresponding input sequence based on experience. It takes as inputs the previous reward, $r^n_{t-1}$, the previous action, $a^n_{t-1}$, and its previous state, $x^n_{t-1}$, and outputs its next state, $x^n_t$. $x^n_t$ is then multiplied by the matrix $W^n$ to generate unnormalised probabilities (logits) $v^n_t$ for taking each action in the next trial, which are converted (through a softmax) to actual probabilities, $\pi^n_t(\cdot)$. The negative log-likelihoods of the true actions are used to define the reconstruction loss, $\mathcal{L}_{RE}$, which along with $\mathcal{L}_{DIS}$ and $\mathcal{L}_{SEP}$ is used to train the model. The $\mathcal{L}_{DIS}$ term induces disentanglement at the group level, and $\mathcal{L}_{SEP}$ separates the effects of each dimension of the latent space on the output of the learning network.
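As a minimal sketch of how the three terms named in the caption might be combined into a single training objective (the weighting coefficients, and the computation of $\mathcal{L}_{DIS}$ and $\mathcal{L}_{SEP}$ themselves, are placeholders here; their exact forms are not given in this excerpt):

```python
import torch.nn.functional as F

def total_loss(logits, actions, dis_term, sep_term,
               lam_dis: float = 1.0, lam_sep: float = 1.0):
    """L = L_RE + lam_dis * L_DIS + lam_sep * L_SEP (weighting assumed).

    logits  -- (T, K) per-trial action logits v_t from the learning network
    actions -- (T,)  integer indices of the actions the subject actually took
    """
    # L_RE: mean negative log-likelihood of the true actions under softmax(v_t).
    l_re = F.cross_entropy(logits, actions)
    return l_re + lam_dis * dis_term + lam_sep * sep_term
```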

An alternative class of computational models of human decision-making involves recurrent neural networks (RNNs) [e.g., Dezfouli et al., 2019, 2018, Yang et al., 2019]. RNNs can model a wide range of dynamical systems [Siegelmann and Sontag, 1995], including human decision-making processes, and make fewer restrictive assumptions about the underlying dynamical processes. However, unlike cognitive models, RNNs typically (i) have large numbers of parameters, which (ii) are hard to interpret. This renders RNNs impractical for studying and modelling individual differences.

Here we develop a novel approach which benefits from the flexibility of RNNs while representing individual differences in a low-dimensional and interpretable space. For the former, we use an autoencoder framework [Rumelhart et al., 1985, Tolstikhin et al., 2017] in which we take the behaviors of a set of subjects as input and automatically build a low-dimensional latent space which quantifies aspects of individual differences along different latent dimensions (Figure 1). As in hyper-networks [Ha et al., 2016, Karaletsos et al., 2018], the coordinates of each subject within this latent space are then used to generate the parameters of an RNN which models the decision-making processes of that subject. To address interpretability, we introduce a novel contribution to the autoencoder's loss function which encourages the different dimensions of the latent space to have separate effects on predicted behavior. This allows them to be interpreted independently. We show that this model is able to learn and extract low-dimensional representations from synthetically-generated behavioral data in which we know the ground truth, and we then apply it to experimental data.

2 The model

Data. The data $\mathcal{D} = \{D^n\}_{n=1}^{N}$ comprise $N$ input sequences, in which each sequence $n \in \{1, \dots, N\}$ consists of the choices of a subject on a sequential decision-making task. In input sequence $n$ the subject performs $T_n$ trials, $D^n = \{(a^n_t, r^n_t)\}_{t=1}^{T_n}$, with action $a^n_t \in \mathcal{C}$ on trial $t$ chosen from a set $\mathcal{C} = \{C_k\}_{k=1}^{K}$, and reward $r^n_t \in \mathbb{R}$.
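Concretely, a minimal in-memory representation of $\mathcal{D}$ might look as follows (a sketch in Python/NumPy; the paper does not prescribe any storage format, and the toy sizes here are ours):

```python
import numpy as np

# D = {D^n}_{n=1..N}; each D^n is the trial-by-trial record of one subject.
# Actions are indices into the choice set C = {C_1, ..., C_K}; rewards are real.
rng = np.random.default_rng(0)
K = 2                                    # number of available actions
dataset = []
for n in range(5):                       # toy N = 5 subjects
    T_n = rng.integers(100, 151)         # sequence lengths may differ per subject
    actions = rng.integers(0, K, size=T_n)   # a^n_t in {0, ..., K-1}
    rewards = rng.random(T_n)                # r^n_t in R
    dataset.append({"actions": actions, "rewards": rewards})
```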

Encoder and decoder. We treat decision-making as a (partly stochastic) dynamical system that maps past experiences as input into outputs in the form of actions. As such, the dataset $\mathcal{D}$ may be considered to contain samples from the output of potentially $N$ different dynamical systems. The aim is then to turn each sequence $D^n$ into a vector, $z^n$, in a latent space, in such a way that $z^n$ captures the characteristic properties of the corresponding dynamical system (whilst avoiding over-fitting the particular choices). In this respect, the task is to find a low-dimensional representation for each of the dynamical systems in an unsupervised manner.

We take an autoencoder-inspired model to achieve this, in which an encoder network is trained to process an entire input sequence into a vector in the latent space. A decoder network then takes as input this vector and recovers an approximation to the original dynamical system. Along with additional factors that we describe below, the model is trained by minimising a reconstruction loss which measures how well the generated dynamical system can predict the observed sequence of actions. The latent representation is thus supposed to capture the "essence" of the input sequence. This is because the latent space is low-dimensional compared to the original sequence and acts as an information bottleneck; as such, the encoder has to learn to encode the most informative aspects of each sequence.

The model architecture is presented in Figure 1. The first part is an encoder RNN (shown in red), and is responsible for extracting the characteristic properties of the input sequence. It takes the whole sequence as input and outputs its terminal state. This state is mapped into the latent space through a series of fully-connected feed-forward layers. The encoder is:

$z^n = \mathrm{enc}(a^n_{1 \dots T}, r^n_{1 \dots T}; \theta_{\mathrm{enc}}) \in \mathbb{R}^{M}, \quad n = 1, \dots, N, \qquad (1)$

in which $\theta_{\mathrm{enc}}$ are the weights of the RNN and feed-forward layers, and $M$ is the dimensionality of the latent space (see the Supplementary Material for the details of the architecture).
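A minimal sketch of such an encoder (PyTorch assumed; the choice of a GRU for the encoder RNN and all layer sizes are our placeholders, since the excerpt defers the exact architecture to the supplementary material):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Sketch of Eq. (1): map a whole sequence (a_1..T, r_1..T) to z in R^M."""
    def __init__(self, n_actions: int, hidden_size: int = 64, latent_dim: int = 2):
        super().__init__()
        self.n_actions = n_actions
        # Each trial is encoded as [one-hot action, scalar reward].
        self.rnn = nn.GRU(n_actions + 1, hidden_size, batch_first=True)
        # Fully-connected layers map the RNN's terminal state into the latent space.
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 32), nn.ReLU(), nn.Linear(32, latent_dim)
        )

    def forward(self, actions: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
        # actions: (batch, T) integer indices; rewards: (batch, T) floats.
        a = F.one_hot(actions, self.n_actions).float()
        x = torch.cat([a, rewards.unsqueeze(-1)], dim=-1)  # (batch, T, K+1)
        _, h_T = self.rnn(x)                               # terminal state (1, batch, H)
        return self.head(h_T.squeeze(0))                   # z^n: (batch, M)
```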

The second part of the model is a feed-forward decoder network (shown in blue) with weights $\theta_{\mathrm{dec}}$, which takes the latent representation as input and outputs a vector $\phi^n$:

$\phi^n = \mathrm{dec}(z^n; \theta_{\mathrm{dec}}). \qquad (2)$

As in a hyper-network, the vector $\phi^n$ contains the weights of a second RNN called the learning network, itself inspired by [e.g., Dezfouli et al., 2019, 2018] and described below. The learning network apes the process of decision-making, taking past actions and rewards as input and returning predictions of the probability of the next action. Making $\phi^n$ such that the learning network reconstructs the original sequence is what forces $z^n$ to encode the characteristics of each subject.
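In the hyper-network style, the decoder itself can be an ordinary feed-forward network whose output vector is reshaped into the learning network's parameter tensors. A sketch (depths and widths are our assumptions, not the paper's):

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Sketch of Eq. (2): map z^n to phi^n, the learning network's weights."""
    def __init__(self, latent_dim: int, n_phi: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phi),      # one flat weight vector per subject
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)                 # phi^n, later reshaped into GRU weights

# Sizing phi for a GRU with N_c cells, K actions, and input [one-hot a, r]:
# Wi: (3*N_c, K+1), Wh: (3*N_c, N_c), bi and bh: (3*N_c,), plus W_out: (K, N_c).
```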

Learning network. The learning network is based on the Gated Recurrent Unit architecture [GRU; Cho et al., 2014] with $N_c$ cells. This realizes a function $f^n$, which at time-step $t$ maps the previous state $x^n_{t-1}$, action $a^n_{t-1}$ and reward $r^n_{t-1}$ into a next state, $x^n_t$:

$x^n_t = f^n(a^n_{t-1}, r^n_{t-1}, x^n_{t-1}; \phi^n), \qquad (3)$

with predictions for the probabilities of actions arising from a weight matrix $W^n$.
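Because the learning network's weights come from the decoder rather than from its own trainable parameters, it is most naturally written functionally. A sketch for one subject (our own GRU-cell implementation following Cho et al., 2014; the exact gate parameterisation used in the paper is not shown in this excerpt):

```python
import torch

def gru_cell(x, h, Wi, Wh, bi, bh):
    """One GRU step with externally supplied weights (from phi^n).
    Wi: (3H, D), Wh: (3H, H), bi and bh: (3H,)."""
    H = h.shape[-1]
    gi = x @ Wi.T + bi                    # input contributions to the r, z, n gates
    gh = h @ Wh.T + bh                    # recurrent contributions
    i_r, i_z, i_n = gi.split(H, dim=-1)
    h_r, h_z, h_n = gh.split(H, dim=-1)
    r = torch.sigmoid(i_r + h_r)          # reset gate
    z = torch.sigmoid(i_z + h_z)          # update gate
    n = torch.tanh(i_n + r * h_n)         # candidate state
    return (1 - z) * n + z * h            # next state, x^n_t in the paper's notation

def unroll(actions_onehot, rewards, Wi, Wh, bi, bh, W_out):
    """Unroll Eq. (3) over one subject's trials; return per-trial logits v^n_t."""
    h = torch.zeros(Wh.shape[-1])
    logits = []
    for t in range(actions_onehot.shape[0]):
        inp = torch.cat([actions_onehot[t], rewards[t:t + 1]])  # [a_{t-1}, r_{t-1}]
        h = gru_cell(inp, h, Wi, Wh, bi, bh)
        logits.append(W_out @ h)          # v^n_t = W^n x^n_t, softmaxed to pi^n_t
    return torch.stack(logits)            # (T, K)
```

Feeding these logits into the reconstruction loss sketched earlier recovers $\mathcal{L}_{RE}$ for one subject.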

[...] Values greater than zero favor repeating an action in the next trial (perseveration), and values less than zero favor switching between the actions. We generated $N = 1500$ agents (saving 30% for testing). The test data was used for determining the optimal number of training iterations (early stopping). Each agent selected 150 actions ($T_n = 150$). We trained our model using the data (see Figure S3 for the trajectory of the loss function and Figure S4 for the distribution of the latent representations). As shown in Figure 2(a), and as intended, these representations turned out to have an interpretable relationship with the parameters that actually determined behavior: the exploration parameter is mainly related to $z_2$ and the perseveration parameter to $z_1$. This outcome arose because of the $\mathcal{L}_{SEP}$ term in the loss function. A similar graph from before introducing $\mathcal{L}_{SEP}$ is shown in Figure S1(a), which shows that without this term, $z_1$ and $z_2$ have mixed relationships with the two parameters. We then sought to interpret each dimension of the latent space in behavioral terms. To do this, we used the decoder to generate learning networks corresponding to different values of the latent representations and interpreted these dimensions by analysing the behaviour of the [...]
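A sketch of that probing procedure (the helpers `unpack`, which reshapes $\phi$ into weight tensors, and `behavioural_stat`, which simulates the decoded network on the task and summarises its choices, are hypothetical):

```python
import torch

def probe_dimension(decoder, unpack, behavioural_stat, dim, values,
                    z_base=None, latent_dim=2):
    """Sweep one latent coordinate, decode a learning network at each value,
    and record a behavioural read-out of the simulated agent."""
    z_base = torch.zeros(latent_dim) if z_base is None else z_base
    stats = []
    for v in values:
        z = z_base.clone()
        z[dim] = v                        # move along a single latent dimension
        phi = decoder(z)                  # generate the learning network's weights
        stats.append(behavioural_stat(unpack(phi)))  # e.g. stay/switch rate
    return stats
```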
