Use this identifier to reference this record: https://hdl.handle.net/1822/74401

Full record
DC Field | Value | Language
dc.contributor.author | Reis, João | por
dc.contributor.author | Phan, Truong Khoa | por
dc.contributor.author | Kheirkhah, Morteza | por
dc.contributor.author | Yang, Fan | por
dc.contributor.author | Griffin, David | por
dc.contributor.author | Rocha, Miguel | por
dc.contributor.author | Rio, Miguel | por
dc.date.accessioned | 2021-10-18T14:08:51Z | -
dc.date.issued | 2021-09-20 | -
dc.identifier.citation | Reis, João; Phan, Truong Khoa; Kheirkhah, Morteza; Yang, Fan; Griffin, David; Rocha, Miguel; Rio, Miguel, R2L: Routing with Reinforcement Learning. IJCNN 2021 - International Joint Conference on Neural Networks. Shenzhen, China, July 18-22, IEEE, 1-7, 2021. | por
dc.identifier.isbn | 978-0-7381-3366-9 | por
dc.identifier.issn | 2161-4407 | -
dc.identifier.uri | https://hdl.handle.net/1822/74401 | -
dc.description.abstract | In a packet network, the routes taken by traffic can be determined according to predefined objectives. Assuming that the network conditions remain static and the defined objectives do not change, mathematical tools such as linear programming could be used to solve this routing problem. However, networks can be dynamic, or the routing requirements may change. In that context, Reinforcement Learning (RL), which can learn to adapt to dynamic conditions and offers behavioral flexibility through the reward function, is a suitable tool for finding good routing strategies. In this work, we train an RL agent, which we call R2L, to address the routing problem. The policy function used in R2L is a neural network, and we use an evolution strategy algorithm to determine its weights and biases. We tested R2L in two different scenarios: static and dynamic network conditions. In the first, we used a 16-node network and experimented with different reward functions, observing that R2L was able to adapt its routing behavior accordingly. In the second, we used a 5-node network topology in which a given link's transmission rate changed during the simulation. In this scenario, R2L delivered performance competitive with heuristic benchmarks under changing network conditions. | por
dc.language.iso | eng | por
dc.publisher | IEEE | por
dc.rights | restrictedAccess | por
dc.title | R2L: routing with reinforcement learning | por
dc.type | conferencePaper | por
dc.peerreviewed | yes | por
dc.relation.publisherversion | https://www.ijcnn.org/ | por
dc.comments | CEB54947 | por
oaire.citationConferenceDate | 18 - 22 July 2021 | por
sdum.event.title | 2021 International Joint Conference on Neural Networks (IJCNN) | por
sdum.event.type | conference | por
oaire.citationStartPage | 1 | por
oaire.citationEndPage | 7 | por
oaire.citationConferencePlace | Shenzhen, China | por
oaire.citationVolume | 2021-July | por
dc.date.updated | 2021-10-18T13:52:10Z | -
dc.identifier.doi | 10.1109/IJCNN52387.2021.9533549 | por
dc.date.embargo | 10000-01-01 | -
dc.description.publicationversion | info:eu-repo/semantics/publishedVersion | -
dc.subject.wos | Science & Technology | por
sdum.journal | IEEE International Joint Conference on Neural Networks (IJCNN) | por
sdum.conferencePublication | IJCNN 2021 - International Joint Conference on Neural Networks | por
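The abstract above describes R2L's general approach: a neural-network policy whose weights and biases are optimized with an evolution strategy rather than gradient-based RL. The sketch below illustrates that general technique on a toy next-hop routing task. It is a minimal illustration, not the paper's implementation: the 5-node topology, the one-hot observation encoding, the negative-hop-count reward, and all names (`policy`, `episode_reward`, `evolve`) are assumptions made for the example.

```python
import numpy as np

# Toy 5-node topology (an assumption for illustration; not the paper's network).
ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
N_NODES = 5
OBS_DIM = 2 * N_NODES  # assumed observation: one-hot current node + one-hot destination
HIDDEN = 16
SHAPES = [(OBS_DIM, HIDDEN), (HIDDEN,), (HIDDEN, N_NODES), (N_NODES,)]
DIM = sum(int(np.prod(s)) for s in SHAPES)

def unpack(params):
    """Split the flat parameter vector into the MLP's weights and biases."""
    out, i = [], 0
    for s in SHAPES:
        n = int(np.prod(s))
        out.append(params[i:i + n].reshape(s))
        i += n
    return out

def policy(params, obs):
    """One-hidden-layer MLP scoring every node as the next hop (architecture assumed)."""
    w1, b1, w2, b2 = unpack(params)
    return np.tanh(obs @ w1 + b1) @ w2 + b2

def episode_reward(params, src=0, dst=4, max_steps=20):
    """Negative hop count for routing a packet src -> dst; a stand-in reward,
    not one of the reward functions studied in the paper."""
    cur = src
    for step in range(max_steps):
        obs = np.zeros(OBS_DIM)
        obs[cur] = 1.0
        obs[N_NODES + dst] = 1.0
        scores = policy(params, obs)
        cur = max(ADJ[cur], key=lambda n: scores[n])  # greedy next hop among neighbours
        if cur == dst:
            return -(step + 1)  # fewer hops -> higher reward
    return -(max_steps + 1)  # penalty for never reaching the destination

def evolve(generations=100, pop=50, sigma=0.1, lr=0.05, seed=0):
    """Basic evolution strategy: sample Gaussian perturbations of the flat
    parameters, evaluate each, and step along the reward-weighted average."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(DIM) * 0.1
    for _ in range(generations):
        eps = rng.standard_normal((pop, DIM))
        rewards = np.array([episode_reward(theta + sigma * e) for e in eps])
        norm = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * eps.T @ norm
    return theta

if __name__ == "__main__":
    theta = evolve()
    print("hops on the learned route:", -episode_reward(theta))
```

The update is the standard evolution-strategy estimator: perturb the flat parameter vector with Gaussian noise, evaluate each perturbation's episode reward, and move the mean parameters along the reward-weighted average of the noise, which approximates a gradient without backpropagation.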
Appears in collections: CEB - Artigos em Livros de Atas / Papers in Proceedings

Files in this record:
File | Size | Format | Access
document_54947_1.pdf | 1.58 MB | Adobe PDF | Restricted access
