Sign language is a form of communication used by deaf and mute people: a method of generating and conveying ideas, information, viewpoints, facts, and feelings to establish a shared understanding. However, integrating its users into society is challenging because most people do not understand their language. Unfortunately, nearly 5% of the world's population lacks the ability to communicate verbally, and the situation is no different in Bangladesh. This paper presents a simple, low-cost Bangla sign language (BdSL) recognition model that converts signs into Bangla text with high accuracy. We manually captured images for Bangla sign language following the BdSL model and used this dataset to train the system and produce output. In the proposed model, we applied neural networks as a deep learning method to recognize individual signs. We split the dataset into two parts, training data and test data, and trained a convolutional neural network (CNN) on it. Image-processing techniques and the neural network map each sign in the training data to its corresponding text, so raw images or videos are converted into readable, comprehensible text with an accuracy near 92%. The system thus turns a user's signs and gestures into text, a task referred to as "sign language recognition." In the future, this research can also open doors to numerous other applications, such as sign language tutorials or dictionaries, and can help deaf and mute users browse the web or send emails more conveniently. We hope this system will offer people with speech or hearing disabilities an excellent opportunity to explore more.
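For concreteness, the sketch below illustrates the kind of pipeline the abstract describes: a train/test split followed by a small CNN classifier mapping sign images to class labels. It is a minimal sketch under assumed details, since the paper does not specify them here: the layer sizes, the 64x64 grayscale input, the class count, the 80/20 split, and the random placeholder arrays (which stand in for the manually captured BdSL images) are all illustrative assumptions.

```python
# Minimal sketch of a CNN sign classifier; architecture and data shapes are
# illustrative assumptions, not the paper's actual configuration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

NUM_CLASSES = 38   # hypothetical number of BdSL sign classes
IMG_SIZE = 64      # hypothetical input resolution

def build_model():
    """Build a small CNN that maps a grayscale sign image to a class label."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data: X holds grayscale sign images, y holds integer labels.
X = np.random.rand(200, IMG_SIZE, IMG_SIZE, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=200)

# Split the dataset into training and test data, as the paper describes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = build_model()
model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
print("Test accuracy:", model.evaluate(X_test, y_test)[1])
```

Once trained, the predicted class index for a new image would be looked up in a label-to-Bangla-text mapping to produce the readable output the system emits.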