Publication Result Detail

Strategies for Training Large Scale Neural Network Language Models

MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.

Original Title

Strategies for Training Large Scale Neural Network Language Models

English Title

Strategies for Training Large Scale Neural Network Language Models

Type

Paper in conference proceedings (not indexed in WoS or Scopus)

Original Abstract

Techniques for effective training of recurrent neural network-based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.

English Abstract

Techniques for effective training of recurrent neural network-based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.
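
As context for the abstract, a minimal sketch of the simple recurrent (Elman-type) network language model this work builds on, where w(t) is the one-hot vector of the current word, s(t) the hidden state, y(t) the distribution over the next word, f the sigmoid, g the softmax, and U, W, V the input, recurrent, and output weight matrices:

% Per-word recurrence of the simple recurrent network language model
\begin{aligned}
\mathbf{s}(t) &= f\left(\mathbf{U}\,\mathbf{w}(t) + \mathbf{W}\,\mathbf{s}(t-1)\right) \\ % update hidden state
\mathbf{y}(t) &= g\left(\mathbf{V}\,\mathbf{s}(t)\right)                                   % predict next word
\end{aligned}

The network is trained with stochastic gradient descent and backpropagation through time; the "maximum entropy" keyword refers to direct hash-based input-to-output connections trained jointly with the recurrent part.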

Keywords

recurrent neural network, language model, speech recognition, maximum entropy

Keywords in English

recurrent neural network, language model, speech recognition, maximum entropy

Authors

MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.

RIV Year

2012

Published

December 11, 2011

Publisher

IEEE Signal Processing Society

Place

Hilton Waikoloa Village, Big Island, Hawaii

ISBN

978-1-4673-0366-8

Book

Proceedings of ASRU 2011

Pages from

196

Pages to

201

Page count

6

URL

http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf

BibTeX

@inproceedings{BUT76453,
  author="Tomáš {Mikolov} and Anoop {Deoras} and Daniel {Povey} and Lukáš {Burget} and Jan {Černocký}",
  title="Strategies for Training Large Scale Neural Network Language Models",
  booktitle="Proceedings of ASRU 2011",
  year="2011",
  pages="196--201",
  publisher="IEEE Signal Processing Society",
  address="Hilton Waikoloa Village, Big Island, Hawaii",
  isbn="978-1-4673-0366-8",
  url="http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf"
}