Understanding the relationship between the meaning of a lexical item and the object to which it refers, the relationship between lexical items and concepts, and how two speakers who use the same lexical items come to share meanings have all been major problems for models of semantic memory. According to the contemporary literature, perceptual-motor, linguistic, and social information carry different weights in the formation of the concepts, whether concrete or abstract, stored in that memory. Across the models developed so far, it is notable that semantic knowledge is represented through the various ways in which concepts relate to one another and through the types of relationships between them. In this context, studies of sign languages, and comparative studies of spoken and sign languages, are scarce, so little is known about the effect of linguistic modality on semantic networks. After all, the theory of semantic networks and the norms for feature production have been grounded in theories of language and language processing tailored to spoken languages. As the incorporation of sign languages and Deaf populations has shown for other psycholinguistic and linguistic topics, including these languages and populations, and comparing them with spoken languages, might increase the explanatory power of the theory to account for both the universal and the contextual aspects of language and its processing. In this effort there is a latent risk: linguistic modality may be merely a vehicle for better-known or more widely studied cross-modal variables (e.g., age of acquisition, functional distribution of the language, size of the available lexicon). If languages are not stored together, yet similar processes can occur in them, it is essential to identify what singular feature of each modality (spoken versus signed) might ground differentiated processes. Considering the high iconicity of sign languages and the possibly high concreteness of their lexical items as distinctive features, not collapsible into well-known variables such as those just mentioned, this article proposes a careful approach to avoid this risk in the study of the effects of linguistic modality (signed versus spoken) on the organization of semantic memory. Since perceptual-motor and social information are the main sources of iconicity, an instrument balanced in its evocation of perceptual-motor, social, and linguistic information is needed. Repeated free word association tasks seem an appropriate paradigm for the proposed approach: because they do not restrict the type of response, free association tasks can capture all kinds of concepts (concrete or abstract), all kinds of semantic relationships and organization (paradigmatic versus thematic), and all kinds of processes (comparison versus interaction). This type of task therefore makes it possible to collect meanings tied both to linguistic information and to non-linguistic experience, since affective and experiential information becomes accessible across the repeated runs of the task. The approach and the instrument are illustrated with an ongoing comparative study of Deaf signing and hearing populations.
The partial findings of this study also serve to highlight the expected effects of the differences in iconicity and in the concreteness or abstractness of the lexical items of each linguistic modality; namely, the differences between an abstract and a concrete conceptualization of the conceptual domains. Taxonomic and introspective labels, indicative of paradigmatic relationships, of a taxonomic organization, and of underlying comparison processes, might point to a predominantly abstract organization. Situational and entity labels, indicative of syntagmatic relationships, of a thematic organization, and of underlying interaction processes, might suggest a predominantly concrete organization.