

Hey everyone!
I wanted to share some of my findings on using AI to analyze board quality, and the potential it shows for the Engagement Mining program.
I'll lay out my methodology here along with the results, plus the areas I'd look at next for refinement if we continue with this idea.
After extraction and data cleanup, several .TXT files remain, such as (boardname)top5.txt, along with guidelines.txt, which contains the publicly listed criteria & explanation for EM qualifications & guidelines.
Claude allows for long-context prompts, giving users the ability to input several documents at once for context and comprehension. This is the AI model we will be using in our discovery.
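As a rough sketch of how the prompt assembly could be automated rather than pasted together by hand: the function below stitches the guidelines file and the per-board top-post exports into one long-context prompt. The function name, section markers, and task wording are all my own placeholders; the actual test prompts were assembled manually.

```python
from pathlib import Path

def build_review_prompt(guidelines_path, board_files, task):
    """Combine guidelines.txt and the (boardname)top5.txt exports
    into a single long-context prompt, ready to paste into Claude."""
    parts = []
    # Guidelines go first so every board is judged against the same criteria.
    guidelines = Path(guidelines_path).read_text(encoding="utf-8")
    parts.append("=== EM GUIDELINES ===\n" + guidelines)
    # Append each board export under a labeled header.
    for board_file in board_files:
        name = Path(board_file).name
        board_text = Path(board_file).read_text(encoding="utf-8")
        parts.append(f"=== BOARD EXPORT: {name} ===\n" + board_text)
    # The instruction comes last, after all the context.
    parts.append("=== TASK ===\n" + task)
    return "\n\n".join(parts)
```

The ordering (criteria first, task last) is just one reasonable layout; it keeps the instruction adjacent to where the model starts generating.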
NOTE: These are just TEST RESULTS, they are NOT indicative of my own opinions, and are not being considered as an accurate and complete analysis.
While this writeup covers just one test, it shows that harnessing the potential of AI in monitoring and refining digital community spaces offers transformative possibilities. As demonstrated, the approach goes beyond mere content analysis, bringing a deep understanding of user engagement and alignment with set guidelines. With further refinement and iterative learning, the potential for these tools grows exponentially. They can ensure that approved boards not only remain vibrant and engaging, but also adapt to the evolving needs of their audiences.
Further, there is massive potential in giving board owners or community creators the ability to interface with an AI directly. While the cost of running long-context AI models may prove prohibitive, the BBS model could theoretically integrate a way to spend rewards on usage of the AI model at cost.
In other tests, I've also given the model single posts to analyze alongside the METABBS Extended Guidelines and our Super Comment guidelines, in order to get AI-generated suggestions for tip amounts and individual post feedback. These tests also showed a powerful understanding of the content and comments within posts, and the suggestions tended to be close to realistic tip amounts for the content.
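The single-post test has the same shape, just with both guideline documents plus one post. A minimal sketch (again, the function name, section markers, and task phrasing are hypothetical stand-ins for what I typed by hand):

```python
def build_tip_prompt(post_text, extended_guidelines, super_comment_guidelines):
    """Wrap one post with the METABBS Extended Guidelines and the
    Super Comment guidelines, then ask for a tip suggestion and feedback."""
    return "\n\n".join([
        "=== METABBS EXTENDED GUIDELINES ===\n" + extended_guidelines,
        "=== SUPER COMMENT GUIDELINES ===\n" + super_comment_guidelines,
        "=== POST ===\n" + post_text,
        "=== TASK ===\nSuggest a tip amount for this post and give brief "
        "feedback, citing the guidelines above.",
    ])
```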
AI analysis could potentially be integrated at several layers of the BBS Network system, as I've mentioned above: at the higher level of Engagement Mining management and analysis, at the mid level of board management and review, and even at the individual level to review post quality, suggest tips, or even suggest moderation action. These integrations could be purely advisory, but there is also the option of functional integration (i.e. the AI actually does something, like taking moderation action alongside its report/advisory response).
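To make the advisory-vs-functional distinction concrete, here is one way the seam could look: the AI's suggestion always produces a report, and a flag decides whether a caller-supplied hook actually executes the action. Everything here (the class, field names, and action labels) is my own illustration, not an existing BBS Network interface.

```python
from dataclasses import dataclass

@dataclass
class ModerationAdvice:
    """Parsed output of an AI review of a single post (hypothetical shape)."""
    post_id: str
    action: str     # e.g. "none", "flag", "hide"
    rationale: str

def handle_advice(advice, functional=False, take_action=None):
    """Advisory mode only surfaces the report; functional mode also
    executes the suggested action through a caller-supplied hook."""
    report = (f"[AI advisory] post {advice.post_id}: "
              f"{advice.action} ({advice.rationale})")
    if functional and advice.action != "none" and take_action is not None:
        take_action(advice.post_id, advice.action)
    return report
```

Keeping the hook injectable means the same pipeline can start out advisory-only and be switched to functional later, per layer, without restructuring anything.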
The potential here is near-limitless, and AI language models are advancing at an incredible rate. The comprehension and quality of analysis will only get better, and utilizing longer-context or higher-quality models will inevitably become more and more cost-effective. If we can achieve results like these with minimal testing on current models, what could we do with further refinement, or with even more advanced models?
Let me know what you think about this test, the analysis, your suggestions, your feedback, or any other comments you may have!
These tests have been fascinating and I'm happy to share more if there are more specific requests as well. Anthropic has yet to release a public API for Claude, so there is time for testing and iteration before any functional exploration. Do you have a question for me to ask the AI? Let me know! Want to know more about another test or try out your own iteration? Let me know!