Multi-robot swarms utilize swarm intelligence to collaborate on tasks and play an increasingly significant role in a variety of practical scenarios. However, due to their complex design, multi-robot swarm systems often contain vulnerabilities caused by logical errors, which can severely disrupt their normal operation. Despite the significant security threats that these logical vulnerabilities pose, testing for and identifying them remains difficult, and existing research faces two major challenges: 1) the explosion of the test input space, and 2) the lack of effective test-guidance strategies. In this paper, we address these two challenges and propose a fuzzing approach guided by formal behavioral constraints to discover logical flaws in multi-robot swarms. Specifically, we abstract linear temporal logic constraints that characterize the swarm's expected behavior and compute the swarm's robustness against these constraints to guide fuzzing; we call this approach LiTelFuzz (Fuzzing based on Linear Temporal Logic Constraints). The core idea of LiTelFuzz is to design a metric based on behavioral constraints that assesses the state of the multi-robot swarm at each moment, and to guide fuzz testing based on this assessment. This design overcomes both the excessive test input space and the lack of fuzzing guidance. On top of LiTelFuzz, we implement a single-attack-drone fuzzing scheme and a multiple-attack-drone fuzzing scheme, named SA-Fuzzing and MA-Fuzzing, respectively. Finally, we evaluate LiTelFuzz on three popular swarm algorithms: SA-Fuzzing finds vulnerabilities with an average success rate of 87.35%, and MA-Fuzzing with 91.73%. Both the success rate and the efficiency surpass those of the existing state-of-the-art fuzzer SWARMFLAWFINDER.
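As a rough illustration of the constraint-based robustness idea summarized above, the sketch below scores a simulated swarm trace against two hypothetical temporal constraints (always maintain a minimum pairwise separation; eventually reach a goal region) using min/max quantitative semantics, and picks the mutated attack input that drives robustness closest to violation for the next fuzzing round. The concrete constraints, thresholds, and the `simulate`/`mutate` callables are illustrative assumptions, not the paper's actual implementation.

```python
import itertools
from typing import Callable, List, Sequence, Tuple

Point = Tuple[float, float]
Trace = List[List[Point]]  # trace[t][i] = position of drone i at time step t

# Hypothetical constraint parameters -- chosen for illustration only.
MIN_SEP = 1.0          # "globally": every pairwise distance stays >= MIN_SEP
GOAL = (50.0, 50.0)    # "eventually": some drone enters the goal region
GOAL_RADIUS = 2.0


def dist(a: Point, b: Point) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def robustness_separation(trace: Trace) -> float:
    """G(min pairwise distance - MIN_SEP >= 0): worst margin over all time steps."""
    margins = []
    for positions in trace:
        for a, b in itertools.combinations(positions, 2):
            margins.append(dist(a, b) - MIN_SEP)
    return min(margins)


def robustness_goal(trace: Trace) -> float:
    """F(some drone within GOAL_RADIUS of GOAL): best margin over all time steps."""
    return max(
        GOAL_RADIUS - min(dist(p, GOAL) for p in positions)
        for positions in trace
    )


def swarm_robustness(trace: Trace) -> float:
    # Conjunction of constraints: overall robustness is the minimum of the parts.
    # A value <= 0 indicates that at least one behavioral constraint is violated.
    return min(robustness_separation(trace), robustness_goal(trace))


def fuzz_round(seed_input,
               simulate: Callable[[object], Trace],
               mutate: Callable[[object], object],
               n_mutants: int = 20):
    """Select the mutated attack input whose simulated trace has the lowest robustness."""
    candidates = [mutate(seed_input) for _ in range(n_mutants)]
    scored = [(swarm_robustness(simulate(c)), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0])
    best_score, best_input = scored[0]
    return best_input, best_score
```

In this sketch, iterating `fuzz_round` greedily steers the attack drone's input toward traces with ever lower robustness until a score at or below zero signals a constraint violation, which corresponds to a discovered logical flaw under the assumed constraints.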