The primary symptom of both appendicitis and diverticulitis is pain in the right lower abdomen, making the two conditions nearly impossible to distinguish from symptoms alone; misdiagnoses also occur when using abdominal computed tomography (CT) scans. Most previous studies have used 3D convolutional neural networks (CNNs), which are well suited to processing sequences of images. However, 3D CNN models can be difficult to implement in typical computing systems because they require large amounts of data, GPU memory, and extensive training time. We propose a deep learning method that utilizes red, green, and blue (RGB) channel superposition images reconstructed from three slices of a sequence of images. With the RGB superposition image as model input, the average accuracy was 90.98% for EfficientNetB0, 91.27% for EfficientNetB2, and 91.98% for EfficientNetB4. The AUC score using the RGB superposition image was higher than that using the original single-channel image for EfficientNetB4 (0.967 vs. 0.959, p = 0.0087). Comparing model architectures under the RGB superposition method, EfficientNetB4 showed the highest performance on all indicators, with an accuracy of 91.98% and a recall of 95.35%; its AUC score was 0.011 higher than that of EfficientNetB0 using the same method (p = 0.0001). Superposing sequential slice images from CT scans enhances distinguishing features such as the shape and size of the target, along with the spatial information used to classify disease. The proposed method has fewer constraints than the 3D CNN approach and is suitable for environments using 2D CNNs; thus, performance improvement can be achieved with limited resources.
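The core idea of the abstract, stacking three consecutive CT slices into the red, green, and blue channels of a single image so a 2D CNN receives inter-slice spatial context, can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and the requirement that slices are already windowed and normalized to 8-bit values are ours, not from the paper):

```python
import numpy as np

def rgb_superposition(slices):
    """Combine three consecutive 2D CT slices into one RGB image.

    Each slice becomes one color channel, so a standard 2D CNN
    (e.g., an EfficientNet backbone) can see spatial information
    across adjacent slices without a 3D architecture.
    Slices are assumed to be pre-windowed uint8 arrays of equal shape.
    """
    if len(slices) != 3:
        raise ValueError("exactly three consecutive slices are expected")
    r, g, b = (np.asarray(s, dtype=np.uint8) for s in slices)
    if not (r.shape == g.shape == b.shape):
        raise ValueError("all slices must share the same shape")
    # Stack along the last axis to produce an (H, W, 3) RGB image.
    return np.stack([r, g, b], axis=-1)

# Synthetic 4x4 "slices" standing in for consecutive axial CT slices.
s0 = np.full((4, 4), 10, dtype=np.uint8)
s1 = np.full((4, 4), 20, dtype=np.uint8)
s2 = np.full((4, 4), 30, dtype=np.uint8)
img = rgb_superposition([s0, s1, s2])
print(img.shape)  # (4, 4, 3)
```

In practice, the resulting three-channel image matches the input shape expected by pretrained 2D CNNs, which is one plausible reason this reconstruction requires no architectural changes.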