<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts | Yidan's Homepage</title><link>https://lu-yidan.netlify.app/news/</link><atom:link href="https://lu-yidan.netlify.app/news/index.xml" rel="self" type="application/rss+xml"/><description>Posts</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://lu-yidan.netlify.app/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Posts</title><link>https://lu-yidan.netlify.app/news/</link></image><item><title>Learning Quadrupedal Locomotion over Challenging Terrain</title><link>https://lu-yidan.netlify.app/news/arclab/</link><pubDate>Sun, 31 Dec 2023 00:00:00 +0000</pubDate><guid>https://lu-yidan.netlify.app/news/arclab/</guid><description>&lt;h2 id="moral-learning-morphologically-adaptive-locomotion-controller-for-quadrupedal-robots-on-challenging-terrains">MorAL: Learning Morphologically Adaptive Locomotion Controller for Quadrupedal Robots on Challenging Terrains&lt;/h2>
&lt;p>&lt;strong>Abstract&lt;/strong>
MorAL is a learning-based control framework adaptive to various quadruped robot morphologies and challenging terrains. It trains a control policy and an adaptive module simultaneously, considering the robot&amp;rsquo;s temporal states. This adaptive module allows the control policy to identify different robots&amp;rsquo; properties and estimate body velocity online. Extensive real-world and simulation tests show that MorAL enables robots with diverse morphologies to navigate various harsh indoor and outdoor terrains effectively.&lt;/p>
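The core idea above — training the control policy together with an adaptive module that recovers morphology and body-velocity information from a window of past states — can be sketched as a small supervised-regression loop. Everything below (dimensions, the linear maps, the loss) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch (not the MorAL code) of training an adaptive module:
# in simulation, a "privileged" latent built from morphology and body
# velocity is known; the adaptive module learns to regress it from a
# history of proprioceptive states, so the policy can consume the
# regressed latent at deployment. All shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

HIST, STATE, LATENT = 10, 8, 4   # assumed history length / dimensions

# Privileged latent: stand-in linear compression of the true state.
W_priv = rng.normal(size=(STATE, LATENT))

def privileged_latent(state):
    return state @ W_priv

# Adaptive module: linear regressor from the flattened state history
# (a stand-in for the temporal encoder described in the paper).
W_adapt = np.zeros((HIST * STATE, LATENT))

def adapt(history):
    return history.reshape(-1) @ W_adapt

lr = 1e-3
losses = []
for step in range(200):
    history = rng.normal(size=(HIST, STATE))  # fake rollout states
    target = privileged_latent(history[-1])   # latent at the current step
    pred = adapt(history)
    err = pred - target
    losses.append(float(err @ err))
    # gradient step on the squared regression error
    W_adapt -= lr * np.outer(history.reshape(-1), err)

print(losses[0], losses[-1])  # the regression loss should shrink
```

In the paper this regression runs concurrently with policy optimization; the sketch isolates only the adaptation-module half of that loop.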
&lt;p>&lt;a href="https://www.youtube.com/watch?v=EjR2OkiLzTA&amp;amp;t=67s" target="_blank" rel="noopener">Video&lt;/a>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="image_moral" srcset="
/news/arclab/image_moral_hu183541e3125a07972ab7d9ba6394d942_1309864_21425f94afeaa38a61bc0cdb4a35bfbd.webp 400w,
/news/arclab/image_moral_hu183541e3125a07972ab7d9ba6394d942_1309864_93b6d861b3b86969cd3bc17d7c9f4de0.webp 760w,
/news/arclab/image_moral_hu183541e3125a07972ab7d9ba6394d942_1309864_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://lu-yidan.netlify.app/news/arclab/image_moral_hu183541e3125a07972ab7d9ba6394d942_1309864_21425f94afeaa38a61bc0cdb4a35bfbd.webp"
width="760"
height="510"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p></description></item><item><title>I borrowed a Unitree Go1 robot! I'm implementing "learning for legged locomotion"!</title><link>https://lu-yidan.netlify.app/news/unitree/</link><pubDate>Mon, 17 Oct 2022 00:00:00 +0000</pubDate><guid>https://lu-yidan.netlify.app/news/unitree/</guid><description>&lt;h2 id="model-based-optimization-based-control">Model-Based Optimization-Based Control&lt;/h2>
&lt;p>&lt;strong>Exp 1: Gait and Trajectory Optimization for Legged Systems Through Phase-Based End-Effector Parameterization&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Planning:
foot flight times,
foot landing positions&lt;/li>
&lt;li>Constraints:
kinematic constraints,
friction constraints,
contact constraints&lt;/li>
&lt;li>Cost function:
distance to the goal&lt;/li>
&lt;/ul>
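A toy 1-D version of this phase-based formulation — the decision variables are the footholds, the cost is the distance of the final foothold to the goal, and a step-length bound stands in for the kinematic constraint — might look like the following sketch (all numbers and names are illustrative assumptions, not the actual TOWR implementation):

```python
# Toy 1-D sketch of phase-based end-effector planning: the foot moves
# through a sequence of swing phases; each swing chooses a landing
# position, clamped to an assumed kinematic reach, and the cost is the
# squared distance of the final foothold to the goal.

MAX_STEP = 0.30   # assumed kinematic reach per swing [m]
GOAL = 1.0        # target foothold position [m]

def plan_footholds(n_steps, start=0.0):
    """Greedy planner: take the longest feasible step toward the goal."""
    pos, footholds = start, []
    for _ in range(n_steps):
        step = max(-MAX_STEP, min(MAX_STEP, GOAL - pos))  # clamp to reach
        pos += step
        footholds.append(pos)
    return footholds

def cost(footholds):
    """Cost: squared distance of the final foothold to the goal."""
    return (footholds[-1] - GOAL) ** 2

plan = plan_footholds(4)
print(plan, cost(plan))
```

The real formulation optimizes flight times and landing positions jointly as a nonlinear program; the greedy clamp here only illustrates how the kinematic bound shapes the foothold sequence.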
&lt;p>&lt;strong>Exp 2: Highly Dynamic Quadruped Locomotion via Whole-Body Impulse Control and Model Predictive Control&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Planning:
ground reaction forces&lt;/li>
&lt;li>Constraints:
kinematic constraints,
friction constraints,
contact constraints&lt;/li>
&lt;li>Cost function:
distance to the desired state&lt;/li>
&lt;/ul>
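A one-step, single-rigid-body caricature of this formulation — pick the ground reaction force that drives the body velocity toward the desired state, subject to a friction-cone constraint — could be sketched as follows (the mass, timestep, and friction coefficient are made-up values, not the paper's controller):

```python
# One-step sketch (illustrative, not the MIT Mini-Cheetah code) of MPC
# over ground reaction forces: compute the force that would realize the
# desired velocity change, then enforce the contact constraints by
# projecting it into the friction cone.
m, dt, g, mu = 12.0, 0.05, -9.81, 0.6   # assumed mass, timestep, gravity, friction

def grf_for(v, v_des):
    """GRF from the desired velocity change, then friction-cone projection."""
    fx = m * (v_des[0] - v[0]) / dt          # horizontal component
    fz = m * ((v_des[1] - v[1]) / dt - g)    # vertical: also cancels gravity
    fz = max(fz, 0.0)                        # unilateral contact: push only
    fx = max(-mu * fz, min(mu * fz, fx))     # friction cone: |fx| <= mu * fz
    return fx, fz

fx, fz = grf_for(v=(0.0, 0.0), v_des=(0.5, 0.0))
print(fx, fz)
```

In the paper this becomes a horizon of such force decisions solved as a QP, with whole-body impulse control mapping the planned forces to joint torques; the projection above only illustrates the friction and contact constraints on a single step.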
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="img1_go1" srcset="
/news/unitree/img1_go1_hu6dbf6c0bdf7cbafa06397a36598fa4c7_6244158_7d826e257985a329b4ea70077cbd19b8.webp 400w,
/news/unitree/img1_go1_hu6dbf6c0bdf7cbafa06397a36598fa4c7_6244158_9d3b64836c8ad883736970e55ddb8bbd.webp 760w,
/news/unitree/img1_go1_hu6dbf6c0bdf7cbafa06397a36598fa4c7_6244158_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://lu-yidan.netlify.app/news/unitree/img1_go1_hu6dbf6c0bdf7cbafa06397a36598fa4c7_6244158_7d826e257985a329b4ea70077cbd19b8.webp"
width="760"
height="570"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p></description></item><item><title>I'm interning at the ASLP Lab! (Speech and Image Information Processing)</title><link>https://lu-yidan.netlify.app/news/aslp/</link><pubDate>Tue, 11 Oct 2022 00:00:00 +0000</pubDate><guid>https://lu-yidan.netlify.app/news/aslp/</guid><description>&lt;h2 id="aslpnpu">ASLP@NPU&lt;/h2>
&lt;p>The Audio, Speech and Language Processing Group at Northwestern Polytechnical University (ASLP@NPU) is an active research team working across the broad area of audio, speech and language processing. Founded in 1995, the ASLP Lab conducts research in audio and speech signal processing, speech recognition, speech synthesis, speaker verification and multimodal (audio-visual) speech processing.
&lt;strong>HomePage of ASLP&lt;/strong>: &lt;a href="http://www.npu-aslp.org/english" target="_blank" rel="noopener">http://www.npu-aslp.org/english&lt;/a>&lt;/p>
&lt;h2 id="实验室简介">实验室简介&lt;/h2>
&lt;p>西北工业大学音频语音与语言处理研究组(ASLP@NPU)依托于空天地海一体化大数据应用技术国家工程实验室和陕西省语音与图像信息处理重点实验室。研究组成立于1995年，经过近20多年的快速发展，已形成了人机语音交互、语音与音频信号处理、音视频多模态信息处理、多媒体内容分析、机器学习等主要研究方向。核心成员包括四位教授与副教授、多位海外兼职教授和70余名硕博士研究生。
&lt;strong>实验室主页&lt;/strong>: &lt;a href="http://www.npu-aslp.org/" target="_blank" rel="noopener">http://www.npu-aslp.org/&lt;/a>&lt;/p>
&lt;h2 id="我最近的工作">我最近的工作&lt;/h2>
&lt;p>用一批唤醒词去训练一个模型，并将其部署在一个嵌入式平台上。目前先在地平线新一代AIoT芯片（旭日x3）上部署python版本，之后开发c的版本。
上述内容完成后部署一个识别的模型。&lt;/p></description></item></channel></rss>