
Commit 215804e

Add Citation to README
1 parent aab3db5 commit 215804e

File tree

1 file changed: +17 -0 lines changed


README.md

Lines changed: 17 additions & 0 deletions
@@ -31,6 +31,7 @@ If you have any questions about our code or model, don't hesitate to contact us
 * [Monitor training](#monitor-training)
 * [Restart training](#restart-training)
 * [Evaluate models on VQA](#evaluate-models-on-vqa)
+* [Citation](#citation)
 * [Acknowledgment](#acknowledgment)
 
 ## Introduction
@@ -322,6 +323,22 @@ Evaluate the model from the best checkpoint. If your model has been trained on t
 python train.py --vqa_trainsplit train --path_opt options/vqa/mutan_att.yaml --dir_logs logs/vqa/mutan_att --resume best -e
 ```
 
+## Citation
+
+Please cite the arXiv paper if you use Mutan in your work:
+
+```
+@article{benyounescadene2017mutan,
+  title={MUTAN: Multimodal Tucker Fusion for Visual Question Answering},
+  author={Hedi Ben-Younes and
+          R{\'{e}}mi Cad{\`{e}}ne and
+          Nicolas Thome and
+          Matthieu Cord},
+  journal={arXiv preprint arXiv:1705.06676},
+  year={2017}
+}
+```
+
 ## Acknowledgment
 
 Special thanks to the authors of [MLB](https://arxiv.org/abs/1610.04325) for providing some [Torch7 code](https://github.com/jnhwkim/MulLowBiVQA), [MCB](https://arxiv.org/abs/1606.01847) for providing some [Caffe code](https://github.com/akirafukui/vqa-mcb), and our professors and friends from LIP6 for the perfect working atmosphere.
