Possibilities for cross-disciplinary interactive performance continue to grow as new tools are developed and adapted. Yet the qualitative aspects of cross-disciplinary interaction have not advanced at the same rate. We suggest that new models for understanding gesture across different media will support the development of nuanced interaction in interactive performance. We have explored this premise by considering models for generating musical rhythmic gestures that enable implicit interaction between the gestures of a dancer and the generated music. We create and implement a model for generating dynamic rhythmic gestures that flow into, around, or out of goal points. Goal points can be layered and quantized to a meter, providing the rhythmic structure expected in music, while the figurations enable the generated rhythms to flow with the performer, responding to the more qualitative aspects of the performance.
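The goal-point idea can be illustrated with a minimal sketch. The abstract does not specify an implementation, so everything here is an assumption for illustration: the function names (`quantize_to_meter`, `gesture_into_goal`), the half-second beat, and the power-curve shaping of onset times are all hypothetical, showing only how a gesture's onsets might be quantized to a metric grid and compressed as they flow into a goal point.

```python
def quantize_to_meter(t, beat=0.5):
    """Snap a time (in seconds) to the nearest beat of the metric grid.

    The 0.5 s beat (120 BPM) is an illustrative assumption."""
    return round(t / beat) * beat

def gesture_into_goal(start, goal, n=6, curve=2.0):
    """Generate onset times whose inter-onset intervals shrink toward
    the goal point, so the gesture 'flows into' it. The concave power
    curve is one hypothetical choice of figuration."""
    span = goal - start
    return [start + span * (i / n) ** (1.0 / curve) for i in range(n + 1)]

# A goal point suggested by a performer's movement, snapped to the meter:
goal = quantize_to_meter(3.87)
onsets = gesture_into_goal(0.0, goal)
```

A layered structure, as described above, could stack several such gestures over goal points placed on different beats of the same meter, while the `curve` parameter could be driven by qualitative features of the dancer's movement.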