A New Adaptive Type-II Fuzzy-Based Deep Reinforcement Learning Control: Fuel Cell Air-Feed Sensors Control

Publication: Contribution to journal › Journal article › Research › peer review

This paper proposes a new adaptive controller for the air-feed subsystem of Proton Exchange Membrane fuel cell (PEMFC) plants. The oxygen excess ratio is first regulated by a single-input interval type-2 fuzzy PI (SIT2-FPI) controller during current variations of the PEMFC system. A deep deterministic policy gradient (DDPG) algorithm is then adopted to adaptively adjust the baseline PI coefficients of the SIT2 controller, exploiting the online-learning and model-free features of reinforcement learning. In the DDPG structure, an actor network determines the policy signals, while a critic network evaluates the quality of the policy provided by the actor. The proposed DDPG algorithm naturally incorporates the baseline PI coefficients into the design objective and gives the SIT2-FPI structure the ability to adjust its coefficients online through learning. Based on reward feedback of the oxygen excess ratio error, the weights of the actor and critic networks are updated by gradient descent. Detailed real-time model-in-the-loop (MIL) simulation results and a comparative analysis are presented to confirm the adaptation capability of the DDPG-based online SIT2-FPI coefficient tuning strategy.
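To illustrate the actor-critic tuning loop described in the abstract, the sketch below shows how a DDPG agent could adjust the baseline PI gains of a fuzzy PI controller from the oxygen excess ratio error. This is not the authors' implementation: the state definition, network sizes, reward shape, and the omission of target networks and replay-buffer details are all assumptions made purely for illustration.

```python
# Minimal DDPG sketch (assumed, not the paper's code) for tuning the baseline
# PI gains (Kp, Ki) of a SIT2-FPI controller from the oxygen excess ratio error.
import torch
import torch.nn as nn

STATE_DIM = 3    # assumed state: [tracking error, integral of error, stack current]
ACTION_DIM = 2   # action: increments to the baseline PI gains [dKp, dKi]

class Actor(nn.Module):
    """Policy network: maps the measured state to bounded gain adjustments."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # outputs bounded in [-1, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Q-network: evaluates the state-action pair proposed by the actor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def ddpg_update(s, a, r, s_next):
    """One gradient-descent step on a batch of transitions (s, a, r, s')."""
    # Critic: minimise the TD error against a bootstrapped target
    # (target networks are omitted here for brevity).
    with torch.no_grad():
        target_q = r + gamma * critic(s_next, actor(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of its own actions' value.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Usage example with an assumed reward r = -|oxygen excess ratio error|.
# In practice, s, a, r, s_next would come from the MIL simulation of the
# PEMFC air-feed loop rather than random data.
s = torch.randn(32, STATE_DIM)
a = actor(s).detach()
r = -s[:, :1].abs()
s_next = torch.randn(32, STATE_DIM)
ddpg_update(s, a, r, s_next)
```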
Original language: English
Journal: IEEE Sensors Journal
Volume: 19
Issue: 20
Pages (from-to): 9081-9089
Number of pages: 9
ISSN: 1530-437X
DOI
Status: Published - 2019

