As modal music, Byzantine music groups its pieces into eight modes called echoi. As in other modal systems, the classification into echoi is neither unique nor universal, and it remains an open question on what grounds the classifications were made historically. Using a Byzantine music dataset of 400 pieces from three sources, we perform computational classification into echoi using three content features: pitches, intervals, and Byzantine signs (called voiced units, which represent melodic movement). Furthermore, we extract phrase groupings and the distribution of voiced units over syllables. Consistent with prior work, this research confirms that the choice of pattern length affects the results. Inspired by the research of Cornelissen et al. (2020), we repeat the classification with n-syllables, which yield better performance. We use tf–idf weighting to build the models, bootstrap sampling to limit the impact of outliers on the results, and two classifiers (k-nearest neighbours and random forests) for cross-checking. We examine the validity of the definitions and observations of the historical theory of Byzantine music by repeating the classification with a modified training set derived through carefully selected elimination strategies. We observe (a) high classification accuracy with the full dataset, (b) that n-syllables perform better than n-grams for the pitch and voiced-unit attributes, and (c) that pitch has the greatest impact on echos identification, as expected. Finally, we discuss the results in relation to the historical theory of Byzantine music and the corresponding work on Gregorian music.
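To make the described pipeline concrete, the following is a minimal sketch in Python with scikit-learn of tf–idf weighting over n-gram tokens, bootstrap sampling of the training set, and cross-checking with k-nearest neighbours and random forests. It is not the authors' implementation: the synthetic data, the n-gram tokenizer, and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset: each "piece" is a sequence of pitch
# symbols, and each class (echos) favours a different pitch vocabulary.
def make_piece(vocab, length=40):
    return list(rng.choice(vocab, size=length))

pieces = [make_piece(["G", "A", "B", "c"]) for _ in range(30)] + \
         [make_piece(["D", "E", "F", "G"]) for _ in range(30)]
labels = np.array(["echos_1"] * 30 + ["echos_2"] * 30)

def ngram_tokens(seq, n=3):
    # Contiguous n-grams of pitch symbols; an n-syllable variant would instead
    # group symbols by syllable boundaries before forming the tokens.
    return ["_".join(seq[i:i + n]) for i in range(len(seq) - n + 1)]

docs = [" ".join(ngram_tokens(seq)) for seq in pieces]

# tf-idf model over the n-gram vocabulary
X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(docs)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, stratify=labels, random_state=0)

# Bootstrap the training set to limit the impact of outliers,
# and cross-check the two classifiers on a held-out test set.
scores = {"knn": [], "rf": []}
for b in range(50):
    Xb, yb = resample(X_train, y_train, random_state=b)
    for name, clf in [("knn", KNeighborsClassifier(n_neighbors=5)),
                      ("rf", RandomForestClassifier(n_estimators=100, random_state=b))]:
        clf.fit(Xb, yb)
        scores[name].append(accuracy_score(y_test, clf.predict(X_test)))

for name, accs in scores.items():
    print(f"{name}: mean accuracy {np.mean(accs):.2f}")
```

In this sketch, swapping the pitch sequences for interval or voiced-unit sequences, or replacing the n-gram tokenizer with a syllable-based grouping, would correspond to the feature variants compared in the paper.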