Our goal is to present a novel systematic approach for automatically assessing conversational alignment across turn-by-turn exchanges, examining linguistic behavior at increasing levels of abstraction through lexical, syntactic, and conceptual similarity. Using recent advances in Python-based NLP tools, the procedure begins by taking conversational partners' turns and converting each into a lemmatized sequence of words, assigning part-of-speech tags and computing a high-dimensional semantic vector for each utterance. Words and part-of-speech tags are further sequenced into n-grams of increasing length (from uni- to quad-grams) to allow a range of linguistic structures to be examined. Lexical, syntactic, and conceptual alignment values are then calculated on a turn-by-turn basis as cosine similarity scores. To showcase our approach, and to demonstrate its effectiveness in capturing turn-level linguistic alignment, we turn to a unique conversational context: one in which participants disagree or agree with each other about contentious sociopolitical topics, with the added element of one partner secretly taking a "devil's advocate" position. Our findings reveal that high-level intentional factors can modulate alignment processes consistently across multiple levels of linguistic abstraction. Contrary to previous findings on non-verbal coordination (Duran and Fusaroli, under review), deception disrupts verbal alignment, and alignment generally decreases over time. Moreover, for a subset of lexical and syntactic measures, this decrease is most pronounced in truth conversations. This suggests that a hypothesized role of alignment, whereby mutual understanding is facilitated, is established early and is required less as truth conversations progress. Implications for current models of interpersonal coordination will be discussed.
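
To make the pipeline concrete, the following is a minimal Python sketch of the kind of procedure described above: lemmatization and part-of-speech tagging of each turn, uni- to quad-gram sequences over lemmas and tags, and cosine similarity between adjacent turns at the lexical, syntactic, and conceptual levels. The use of spaCy and its en_core_web_md vectors is an illustrative assumption, not the authors' actual implementation.

```python
# Illustrative sketch only: spaCy and the en_core_web_md model are assumptions,
# not necessarily the tools used in the reported pipeline.
import math
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_md")  # medium English model ships word vectors


def ngrams(tokens, n):
    """Return the sequence of n-grams (as tuples) over a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def cosine(counter_a, counter_b):
    """Cosine similarity between two bag-of-n-gram count vectors."""
    shared = set(counter_a) & set(counter_b)
    dot = sum(counter_a[k] * counter_b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in counter_a.values()))
    norm_b = math.sqrt(sum(v * v for v in counter_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def turn_alignment(prev_turn, curr_turn, max_n=4):
    """Turn-by-turn lexical, syntactic, and conceptual alignment scores."""
    prev_doc, curr_doc = nlp(prev_turn), nlp(curr_turn)

    prev_lemmas = [t.lemma_.lower() for t in prev_doc if not t.is_punct]
    curr_lemmas = [t.lemma_.lower() for t in curr_doc if not t.is_punct]
    prev_pos = [t.pos_ for t in prev_doc if not t.is_punct]
    curr_pos = [t.pos_ for t in curr_doc if not t.is_punct]

    scores = {}
    for n in range(1, max_n + 1):  # uni- through quad-grams
        scores[f"lexical_{n}gram"] = cosine(
            Counter(ngrams(prev_lemmas, n)), Counter(ngrams(curr_lemmas, n)))
        scores[f"syntactic_{n}gram"] = cosine(
            Counter(ngrams(prev_pos, n)), Counter(ngrams(curr_pos, n)))

    # Conceptual alignment: cosine between the turns' averaged word vectors.
    scores["conceptual"] = float(prev_doc.similarity(curr_doc))
    return scores


print(turn_alignment("I think the policy helps everyone.",
                     "The policy really does help everyone, I agree."))
```

Applied to every adjacent pair of turns in a transcript, a sketch like this yields the turn-level alignment trajectories whose modulation by deception and time is examined in the study.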