AI Memory Problem
One of the main problems we encountered was the "lack of memory" of the models we had access to.
Given the models included in our plan, we could not access one that could "remember" the previous parts of our dialogue. To work around this, we created a text file that was updated every time a new sentence was written by us or returned by the AI, so that the full transcript could be sent back to the model with each new request.
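A minimal sketch of this workaround might look like the following, assuming the NLP Cloud Python client and the GPT-J model mentioned below; the file name, the "Human:"/"AI:" labels, and the chat helper are our own illustrative choices, not the project's actual code:

```python
import nlpcloud

# Placeholder API token; "gpt-j" is one of the models discussed in this section.
client = nlpcloud.Client("gpt-j", "<your_api_token>", gpu=True)

TRANSCRIPT = "dialogue.txt"  # illustrative file name

def chat(user_sentence: str) -> str:
    # Record our sentence in the running transcript file.
    with open(TRANSCRIPT, "a", encoding="utf-8") as f:
        f.write("Human: " + user_sentence + "\n")

    # Re-send the whole transcript so the model "sees" the earlier turns.
    with open(TRANSCRIPT, encoding="utf-8") as f:
        context = f.read()
    result = client.generation(context + "AI:", max_length=150)
    # Depending on the API version, "generated_text" may echo the prompt,
    # in which case it would need to be trimmed before saving.
    answer = result["generated_text"]

    # Record the AI's reply as well, keeping the file up to date.
    with open(TRANSCRIPT, "a", encoding="utf-8") as f:
        f.write("AI: " + answer + "\n")
    return answer
```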
Once that question was solved, the next problem was the number of tokens (the units a sentence is split into) each model could handle (GPT-J and Fast GPT-J: 1,024; GPT-NeoX-20B: 2,048). The solution we adopted was to create a summarized version of the dialogue using the Bart Large CNN model (also provided by NLP Cloud), write the summary to a new file, and continue the conversation from that new document.
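The summarization step could look roughly like this sketch, again using the NLP Cloud Python client; the file names, the helper function, and the absence of an explicit token-count check are our simplifications:

```python
import nlpcloud

# Placeholder API token; Bart Large CNN is the summarization model named above.
summarizer = nlpcloud.Client("bart-large-cnn", "<your_api_token>")

def compress_transcript(old_path: str = "dialogue.txt",
                        new_path: str = "dialogue_2.txt") -> str:
    # Read the full dialogue that is approaching the model's token limit.
    with open(old_path, encoding="utf-8") as f:
        full_dialogue = f.read()

    # Ask the summarization endpoint for a condensed version of the dialogue.
    summary = summarizer.summarization(full_dialogue)["summary_text"]

    # Write the summary to a new file; the conversation continues from here.
    with open(new_path, "w", encoding="utf-8") as f:
        f.write(summary + "\n")
    return new_path
```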