Black-box nonlinear observer-based deep reinforcement learning controller with application to Floating Wind Turbines

Document Type : Article

Authors

1 Faculty of Mechanical Engineering, University of Tabriz, East Azerbaijan

2 Faculty of Electrical and Computer Engineering, University of Tabriz, East Azerbaijan

10.24200/sci.2024.63823.8614

Abstract

Developments in ocean energy have prompted researchers to investigate floating offshore wind turbines (FOWTs), making the stabilization of these structures a crucial problem in control engineering. The presence of disturbances and measurement noise further motivates an intelligent control approach. This paper addresses the nonlinear FOWT with an online feedback control system based on deep reinforcement learning (DRL). The inherent adaptivity of DRL allows the FOWT controller to cope with changing environments by employing two parallel networks, known as the online and target networks. An observer with direct gain is integrated into the loop using the measured outputs from the available sensors, and its global asymptotic stability is demonstrated via a Lyapunov function. Furthermore, an agent trained with a deep Q-network (DQN) in the adapted environment requires only a small number of training instances to determine the optimal control policy. Simulation tests in MATLAB show that the proposed observer-controller outperforms the LQR approach in stabilizing the FOWT. It is also shown that the Luenberger observer does not perform as effectively as the newly developed observer in the presence of uncertainties and unknown disturbances. Finally, the outcomes are compared with the gain-scheduled PI control method recommended by Jonkman, a well-known benchmark, to validate the accuracy of the simulation results.
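The online/target network scheme mentioned in the abstract can be illustrated with a minimal sketch. The following is not the authors' implementation; it is a generic DQN-style update under assumed, hypothetical dimensions (4 measured states, 3 discrete actuator actions) and a linear Q-function approximation, showing how the online network is trained against a slowly tracking target network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only: 4 measured states, 3 discrete actions
N_STATES, N_ACTIONS = 4, 3
GAMMA = 0.99   # discount factor
TAU = 0.01     # soft-update rate for the target network
LR = 1e-2      # learning rate

# Linear Q-function approximation: Q(s, a) = (W @ s)[a]
W_online = rng.normal(scale=0.1, size=(N_ACTIONS, N_STATES))
W_target = W_online.copy()

def td_update(s, a, r, s_next, done):
    """One DQN step: TD target from the target net, gradient step on the online net."""
    global W_online, W_target
    q_next = W_target @ s_next                        # bootstrap with the *target* network
    target = r + (0.0 if done else GAMMA * q_next.max())
    error = target - (W_online @ s)[a]                # TD error on the online network
    W_online[a] += LR * error * s                     # gradient step for the action taken
    W_target = (1 - TAU) * W_target + TAU * W_online  # Polyak (soft) target update
    return error
```

Repeatedly applying `td_update` to a transition drives the TD error toward zero, while the target network changes only slowly (rate `TAU`), which is what stabilizes bootstrapped learning in the two-network scheme.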
