Objective To investigate the accuracy and reliability of the augmented reality (AR) technique in locating the perforating vessels of the posterior tibial artery during repair of soft tissue defects of the lower limbs with the posterior tibial artery perforator flap. Methods Between June 2019 and June 2022, the posterior tibial artery perforator flap was used to repair skin and soft tissue defects around the ankle in 10 cases. There were 7 males and 3 females with an average age of 53.7 years (range, 33-69 years). The injury was caused by a traffic accident in 5 cases, crushing by a heavy object in 4 cases, and a machine injury in 1 case. The size of the wound ranged from 5 cm×3 cm to 14 cm×7 cm. The interval between injury and operation was 7-24 days (mean, 12.8 days). Preoperative CT angiography of the lower limbs was performed, and the data were used to reconstruct three-dimensional images of the perforating vessels and bones with Mimics software. These images were projected and superimposed onto the surface of the affected limb using AR technology, and the flap was designed and harvested with precise positioning. The size of the flap ranged from 6 cm×4 cm to 15 cm×8 cm. The donor site was sutured directly or repaired with a skin graft. Results In the 10 patients, 1-4 perforating branches of the posterior tibial artery (mean, 3.4) were located by the AR technique before operation. The intraoperative locations of the perforator vessels were basically consistent with the preoperative AR locations; the distance between the two ranged from 0 to 16 mm (mean, 12.2 mm). All flaps were successfully harvested and the repairs performed according to the preoperative design. Nine flaps survived without vascular crisis. Local infection of the skin graft occurred in 2 cases and necrosis of the distal edge of the flap in 1 case, all of which healed after dressing changes. The other skin grafts survived, and the incisions healed by first intention.
All patients were followed up for 6-12 months (mean, 10.3 months). The flaps were soft, without obvious scar hyperplasia or contracture. At last follow-up, according to the American Orthopaedic Foot and Ankle Society (AOFAS) score, ankle function was excellent in 8 cases, good in 1 case, and poor in 1 case. Conclusion The AR technique can be used to locate the perforator vessels in preoperative planning of the posterior tibial artery perforator flap; it can reduce the risk of flap necrosis and is simple to perform.
Brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have attracted much attention in the field of intelligent robotics. Traditional SSVEP-based BCI systems mostly use synchronized triggers without identifying whether the user is in a control or non-control state, so the system lacks autonomous control capability. Therefore, this paper proposed an SSVEP asynchronous state recognition method that constructs an asynchronous state recognition model by fusing multiple time- and frequency-domain features of electroencephalographic (EEG) signals and combining them with linear discriminant analysis (LDA) to improve the accuracy of SSVEP asynchronous state recognition. Furthermore, to address the control needs of disabled individuals in multitasking scenarios, a brain-computer fusion system based on asynchronous cooperative SSVEP-BCI control was developed. This system enabled collaborative control of a wearable manipulator and a robotic arm, in which the robotic arm acts as a "third hand", offering significant advantages in complex environments. The experimental results showed that the proposed SSVEP asynchronous control algorithm and brain-computer fusion system could assist users in completing multitasking cooperative operations. The average accuracy of user intent recognition in online control experiments was 93.0%, which provides a theoretical and practical basis for the practical application of asynchronous SSVEP-BCI systems.
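The idea of fusing time- and frequency-domain EEG features and feeding them to an LDA classifier to separate control from non-control (idle) states can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the sampling rate, stimulus frequencies, and the specific features (per-channel variance plus spectral power at each stimulus frequency) are assumptions chosen for clarity.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]  # hypothetical SSVEP stimulus frequencies

def time_freq_features(epoch):
    """Fuse simple time- and frequency-domain features from one EEG epoch.

    epoch: (n_channels, n_samples) array. The features here (variance per
    channel plus spectral power at each stimulus frequency) are illustrative
    stand-ins for the feature set used in the paper.
    """
    n_samples = epoch.shape[1]
    spectrum = np.abs(np.fft.rfft(epoch, axis=1)) / n_samples
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / FS)
    band_power = [
        spectrum[:, np.argmin(np.abs(freqs - f))].mean() for f in STIM_FREQS
    ]
    return np.concatenate([epoch.var(axis=1), band_power])

# Synthetic demo: "control" epochs carry a 10 Hz SSVEP-like component,
# "non-control" epochs are pure noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS  # 1-second epochs

def make_epoch(control):
    noise = rng.normal(0.0, 1.0, (4, FS))  # 4 channels of background EEG noise
    return noise + (2.0 * np.sin(2 * np.pi * 10.0 * t) if control else 0.0)

X = np.array([time_freq_features(make_epoch(c)) for c in [True, False] * 50])
y = np.array([1, 0] * 50)  # 1 = control state, 0 = non-control state

lda = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
print("held-out accuracy:", lda.score(X[80:], y[80:]))
```

On this synthetic data the two states are easily separable; in practice the reported 93.0% online accuracy reflects far noisier signals and a richer feature set.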
This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection and AR technology, the system augmented real objects with visual enhancements, providing users with visual stimuli that induced the corresponding brain signals. SSVEP decoding was then used to interpret these signals and identify the objects that users focused on. Additionally, an adaptive dynamic-time-window filter bank canonical correlation analysis was employed to rapidly parse the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. The system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.