Trust is an essential requirement for effective human-agent interaction as artificial agents become part of human society. To blend into society and maximize their acceptability and reliability, artificial agents must adapt to the complexity of their surroundings, as humans do. This adaptation requires knowing whom to trust, that is, evaluating the trustworthiness of the human counterpart. Cognitive agents therefore need trust models that allow them to trust humans the way a human trusts other humans, taking into account all factors that influence the human-agent trust mechanism. Several antecedents, both within the cognitive system itself and in the surroundings, dynamically influence this mechanism. Personality, as a trust antecedent, has been found to have a substantial impact on predicting a human interactor's trustworthiness and thus critically assists trust decision making. The present research therefore aims to incorporate the characteristics of the respective human as an antecedent of the human-agent trust process. This is accomplished by equipping the trust model with the agent's capability to perceive the personality traits of the human interactor. The current work introduces a trustworthiness assessment model based on fuzzy inference (TAMFIS) that assesses a human's trustworthiness by exploring the personality traits that predict it. The artificial agent can thereby develop a disposition toward its human collaborators that supports effective interaction. The proposed architecture is tested using the Dempster-Shafer theory of evidence. It is anticipated that the proposed trust model will effectively evaluate the trustworthiness of human collaborators and support a more reliable human-agent trust relationship.
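To make the evaluation machinery concrete, the sketch below shows Dempster's rule of combination, the core operation of the Dempster-Shafer theory mentioned above, fusing two hypothetical mass assignments over the frame {trustworthy, untrustworthy}. The frame, the two evidence sources, and all mass values are invented for illustration and are not taken from the paper's actual experiments:

```python
from itertools import product


def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    Each argument maps frozenset focal elements of the frame to masses
    that sum to 1. Masses of intersecting focal elements are multiplied
    and accumulated; mass assigned to the empty intersection (conflict)
    is removed by renormalization.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}


T = frozenset({"trustworthy"})
U = frozenset({"untrustworthy"})
THETA = T | U  # the whole frame, i.e. ignorance

# Hypothetical evidence from two independent trustworthiness cues
m1 = {T: 0.6, U: 0.1, THETA: 0.3}
m2 = {T: 0.5, U: 0.2, THETA: 0.3}

belief = combine(m1, m2)
```

With these illustrative masses the conflict is 0.17, so the combined mass on "trustworthy" is 0.63/0.83 ≈ 0.76, showing how two moderately supportive pieces of evidence reinforce each other under the rule.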