A significant portion of the Brazilian population has hearing or visual impairments. Conventional multimodal interaction systems do not suit the needs of such a mixed audience, since they are designed for a specific user profile. We propose a service-based model for a Multimodal Human-Computer Interaction System (MMHCI), embedded in an assistive robot, that adapts communication to the type and degree of the user's disabilities. The proposed approach emphasizes the adaptation of interaction channels to the user's needs. Experiments elicited positive user attitudes toward the usability aspects of the system.