While end users can today acquire genuine 3D gestures thanks to many input devices, they sometimes capture only 3D trajectories, i.e., any 3D uni-path, uni-stroke gesture performed in thin air. Such trajectories, with their (x, y, z) coordinates, can be interpreted as three 2D multi-stroke gestures projected onto three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To address the question of whether such trajectories can be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, have been extended to consider the third dimension: $P3, $P+3, $Q3, and Rubine3D. The Rubine-Sheng extension is also included. Two new variations are also created to investigate some sampling flexibility: $F, for flexible Dollar recognition, and FreeHandUni. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS in a user-independent scenario, and 3DMadLabSD in both user-dependent and user-independent scenarios. Individual recognition rates and execution times per dataset, as well as results aggregated over all datasets, show performance comparable to that of state-of-the-art 2D multi-stroke recognizers, thus suggesting that the approach is viable. We suggest some usage of these recognizers depending on conditions. These results are not generalizable to other types of 3D mid-air gestures.
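The projection step described above can be sketched as follows. This is a minimal illustration under our own assumptions; the function name and data layout are hypothetical and not taken from the paper:

```python
# Sketch: turn a 3D uni-stroke trajectory into a 2D multi-stroke gesture
# by projecting its (x, y, z) points onto the XY, YZ, and ZX planes.
# Illustrative only; names and structure are not from the paper.

def project_to_planes(trajectory):
    """trajectory: list of (x, y, z) points of a uni-path 3D gesture.
    Returns three 2D strokes, one per projection plane."""
    xy = [(x, y) for x, y, z in trajectory]
    yz = [(y, z) for x, y, z in trajectory]
    zx = [(z, x) for x, y, z in trajectory]
    # Together, the three strokes form one 2D multi-stroke gesture,
    # admissible as input for recognizers such as $P, $P+, or $Q.
    return [xy, yz, zx]

traj = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0), (2.0, 4.0, 6.0)]
strokes = project_to_planes(traj)
```

A recognizer would then treat `strokes` exactly like any hand-drawn three-stroke 2D gesture.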