In this paper, we evaluate a semiautonomous brain-computer interface (BCI) for manipulation tasks. In such a system, the user controls a robotic arm through motor-imagery commands. In traditional process-control BCI systems, the user must issue these commands continuously to move the robot's effector step by step, which makes even simple tasks, such as picking up an item from a surface and placing it elsewhere, tiresome. Here, we take a semiautonomous approach based on a conformal geometric algebra model that solves the inverse kinematics of the robot on the fly, so the user only decides when the movement starts and where the effector should end up (a goal-selection approach). Under these conditions, we implemented pick-and-place tasks with a disk as the item and two target areas placed at arbitrary positions on a table. An artificial vision (AV) algorithm obtains the positions of the items, expressed in the robot frame, from images captured with a webcam; these positions are then fed into the inverse kinematics model to perform the manipulation tasks. As a proof of concept, several users were trained to perform the pick-and-place tasks under both the process-control and the semiautonomous goal-selection schemes so that the performance of the two could be compared. Our results show that the semiautonomous approach yields superior performance, along with evidence of lower mental fatigue.
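To make the goal-selection flow concrete, the following is a minimal sketch of the pipeline the abstract describes: a vision measurement is mapped into the robot frame and a closed-form inverse kinematics step moves the effector to the selected goal. The two-link planar arm, the fixed pixel-to-robot calibration, and all function names are illustrative assumptions; they stand in for the paper's actual manipulator and conformal geometric algebra solver.

```python
import math

def pixel_to_robot(u, v, scale=0.001, origin=(0.2, 0.0)):
    """Map a pixel coordinate to a planar robot-frame position (metres).
    A fixed scale/origin stands in for a real camera calibration."""
    return origin[0] + scale * u, origin[1] + scale * v

def ik_two_link(x, y, l1=0.3, l2=0.25):
    """Closed-form IK for a two-link planar arm (elbow-up solution),
    a simplified stand-in for the conformal geometric algebra model."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("goal out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk_two_link(q1, q2, l1=0.3, l2=0.25):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

if __name__ == "__main__":
    # The user selects a goal; vision reports the disk at pixel (150, 80).
    gx, gy = pixel_to_robot(150, 80)
    q1, q2 = ik_two_link(gx, gy)       # solved on the fly, no step-by-step input
    ex, ey = fk_two_link(q1, q2)       # effector lands on the selected goal
```

In a goal-selection scheme, this solve runs once per user decision; under process control, the user would instead issue a motor-imagery command for every incremental step of the effector.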