Please use this identifier to cite or link to this item: https://hdl.handle.net/1822/68633

Full record
DC Field | Value | Language
dc.contributor.author | Meneghini, Ivan Reinaldo | por
dc.contributor.author | Alves, Marcos Antonio | por
dc.contributor.author | Gaspar-Cunha, A. | por
dc.contributor.author | Guimarães, Frederico Gadelha | por
dc.date.accessioned | 2020-12-21T10:33:41Z | -
dc.date.available | 2020-12-21T10:33:41Z | -
dc.date.issued | 2020 | -
dc.identifier.issn | 1568-4946 | por
dc.identifier.uri | https://hdl.handle.net/1822/68633 | -
dc.description.abstract | Solving many-objective problems (MaOPs) is still a significant challenge in the multi-objective optimization (MOO) field. One way to measure algorithm performance is through the use of benchmark functions (also called test functions or test suites), which are artificial problems with a well-defined mathematical formulation, known solutions and a variety of features and difficulties. In this paper we propose a parameterized generator of scalable and customizable benchmark problems for MaOPs. It is able to generate problems that reproduce features present in other benchmarks and also problems with some new features. We propose here the concept of generative benchmarking, in which one can generate an infinite number of MOO problems, by varying parameters that control specific features that the problem should have: scalability in the number of variables and objectives, bias, deceptiveness, multimodality, robust and non-robust solutions, shape of the Pareto front, and constraints. The proposed Generalized Position-Distance (GPD) tunable benchmark generator uses the position-distance paradigm, a basic approach to building test functions, used in other benchmarks such as Deb, Thiele, Laumanns and Zitzler (DTLZ), Walking Fish Group (WFG) and others. It includes scalable problems in any number of variables and objectives and it presents Pareto fronts with different characteristics. The resulting functions are easy to understand and visualize, easy to implement, fast to compute and their Pareto optimal solutions are known. | por
dc.description.sponsorship | This work has been supported by the Brazilian agencies (i) National Council for Scientific and Technological Development (CNPq); (ii) Coordination for the Improvement of Higher Education (CAPES) and (iii) Foundation for Research of the State of Minas Gerais (FAPEMIG, in Portuguese). | por
dc.language.iso | eng | por
dc.publisher | Elsevier | por
dc.rights | openAccess | por
dc.subject | Benchmark functions | por
dc.subject | Scalable test functions | por
dc.subject | Many-objective optimization | por
dc.subject | Evolutionary algorithms | por
dc.title | Scalable and customizable benchmark problems for many-objective optimization | por
dc.type | article | -
dc.peerreviewed | yes | por
dc.relation.publisherversion | https://www.sciencedirect.com/science/article/pii/S156849462030079X | por
oaire.citationVolume | 90 | por
dc.identifier.doi | 10.1016/j.asoc.2020.106139 | por
dc.subject.fos | Natural Sciences::Computer and Information Sciences | por
dc.subject.wos | Science & Technology | por
sdum.journal | Applied Soft Computing | por
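The abstract above refers to the position-distance paradigm on which the GPD generator is built and which is also used by benchmarks such as DTLZ and WFG. The sketch below is a minimal illustration of that paradigm using the classic DTLZ2 problem, not the authors' GPD implementation: the first M-1 "position" variables place a solution on the Pareto front, while the remaining "distance" variables control the distance to it through a function g. The function name and NumPy-based layout are illustrative choices.

```python
import numpy as np

def dtlz2(x, n_obj):
    """Minimal DTLZ2-style test function illustrating the position-distance paradigm.

    The first (n_obj - 1) entries of x are "position" variables that select a point
    on the front; the remaining entries are "distance" variables whose deviation
    from 0.5 defines g(x), the distance to the true Pareto front (g = 0 on the front).
    """
    x = np.asarray(x, dtype=float)
    x_pos, x_dist = x[:n_obj - 1], x[n_obj - 1:]
    g = np.sum((x_dist - 0.5) ** 2)        # distance component
    theta = x_pos * np.pi / 2.0            # map position variables to angles
    f = np.full(n_obj, 1.0 + g)
    for i in range(n_obj):
        f[i] *= np.prod(np.cos(theta[:n_obj - 1 - i]))
        if i > 0:
            f[i] *= np.sin(theta[n_obj - 1 - i])
    return f

# Example: 3 objectives, 12 variables; a Pareto-optimal point has every
# distance variable equal to 0.5, so g = 0 and the objectives lie on the
# unit hypersphere (sum of squares = 1).
x = np.concatenate([np.random.rand(2), np.full(10, 0.5)])
f = dtlz2(x, n_obj=3)
print(f, np.sum(f ** 2))
```

On the true front the objective vector lies on the unit hypersphere, which is the "Pareto optimal solutions are known" property highlighted in the abstract; per the abstract, GPD generalizes this construction with parameters controlling bias, deceptiveness, multimodality, robustness, front shape, and constraints.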
Appears in collections: IPC - Articles in international peer-reviewed scientific journals

Files in this record:
File | Description | Size | Format
1-s2.0-S156849462030079X-main.pdf |  | 3.01 MB | Adobe PDF
