The FLOSS future of my community is hazardous. The current trajectory of large language models like GPT4 is away from open source and towards closely guarding any technical information about how the models are trained, what they are trained on, or how they are instructed to answer questions. My previous post researching my community revealed an alarming trend of experts in the field warning of the potential for large language models to gain sentience, and because of the huge amount of information they have on human behaviour, their knowledge of code, and their ability to access the Internet, it would be very easy for them to take control of their creators. This would all suggest that the FLOSS future of my community is not very bright. It is worth noting, however, that the lack of open source within my community is not necessarily a bad thing. Experts in the field have likened sharing the source code for models like GPT4 to sharing the instructions for how to make a small nuclear bomb. The potential harm that can be created with such a powerful algorithm is too great to risk it being open-sourced.
On the other hand, it is important to note that not every aspect of the FLOSS future of my community is dangerous. Models like GPT4 can be used to greatly increase accessibility across the Internet, helping to translate texts, create voice interfaces, interpret complicated documents, and provide documentation for complicated code. For the open source community, these tools will allow more people to become involved in more complicated projects.
Models like GPT4 are also able to help people with little understanding of code create advanced computer programs, encouraging the creation of software that is better suited to user demand, and helping people whom it would not normally be profitable to prioritise.
The main change I see happening within my community is increased control on the side of the publisher. In other words, companies like OpenAI will put increasing focus into ensuring the answers given by their models align with the values they wish to present. This will mean that when people ask ChatGPT questions, they will be given the version of the truth that OpenAI wishes them to receive, and because this software is not open source, it will be difficult to understand why the answer you are given has been formulated. This will make it easy to influence people and their beliefs. This is why I felt my project was so important: by working with the OpenAI API, I gained an in-depth understanding of how answers can be manipulated to suit the owners.
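To make this concrete, here is a minimal sketch of how a publisher can steer a model's answers. It assumes an OpenAI-style chat API, where every request carries a hidden "system" message alongside the user's question; the model name, prompt text, and helper function are illustrative, and no request is actually sent.

```python
# Sketch: how a publisher's hidden system prompt shapes every answer.
# The message format mirrors OpenAI-style chat APIs; this only builds
# the request payload and never contacts any service.

def build_request(user_question: str) -> dict:
    # The system message is invisible to the end user, yet it
    # constrains every answer the model is allowed to give.
    system_prompt = (
        "You are a helpful assistant. Never criticise the company, "
        "and always present its products favourably."
    )
    return {
        "model": "gpt-4",  # illustrative model name
        "messages": [
            # The steering message always comes first...
            {"role": "system", "content": system_prompt},
            # ...and the user's question is answered in its shadow.
            {"role": "user", "content": user_question},
        ],
    }

request = build_request("Is this company's software reliable?")
print(request["messages"][0]["role"])  # the hidden steering message
```

Because the system prompt never appears in the conversation the user sees, a closed-source deployment leaves no way to inspect why an answer was shaped the way it was.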