0xReclamation
Aug 24, 2023

Hey everyone! 

I wanted to share some of my findings on using AI to analyze board quality, and the potential it shows for the Engagement Mining program.

I'll lay out my methodology here, along with the results and the areas I'd look at next for refinement if we continue with this idea.

Methodology

Extract posts & comments for processing

  1. Utilize BBS search to identify Top 5 posts within 56 days (https://bbs.market/bbs/METABBS/search/posts?q=*&tokenName=METABBS&sortBy=bestMatch&range=56)
  2. Sort posts by ‘Visits’
  3. Extract contents of top posts NOT created by an owner or moderator.
  4. Combine extractions into one file for AI utilization
  5. Add the board’s ‘About’ section for contextual information
  6. Repeat for each board

After extraction and data cleanup, several .TXT files remain, such as (boardname)top5.txt, along with guidelines.txt, which contains the publicly listed criteria & explanation for EM qualifications & guidelines.
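For anyone who wants to reproduce the extraction, here's a minimal Python sketch of steps 1-6. The search endpoint and its parameters come straight from the URL in step 1, but the JSON field names ('posts', 'visits', 'authorRole', etc.) are my assumptions about the response shape, not documented fields:

```python
import requests

SEARCH_URL = "https://bbs.market/bbs/{token}/search/posts"

def extract_top_posts(token, top_n=5, range_days=56):
    # Step 1: query BBS search (parameters taken from the URL above);
    # the JSON field names below are assumptions, not documented fields
    resp = requests.get(
        SEARCH_URL.format(token=token),
        params={"q": "*", "tokenName": token,
                "sortBy": "bestMatch", "range": range_days},
        timeout=30,
    )
    posts = resp.json().get("posts", [])
    # Step 2: sort by visits
    posts.sort(key=lambda p: p.get("visits", 0), reverse=True)
    # Step 3: drop posts created by an owner or moderator
    posts = [p for p in posts if p.get("authorRole") not in ("owner", "moderator")]
    return posts[:top_n]

def write_board_file(board_name, token, about_text):
    # Steps 4-5: combine extractions into one file, prepending the About section
    posts = extract_top_posts(token)
    with open(f"{board_name}top5.txt", "w", encoding="utf-8") as f:
        f.write(f"=== ABOUT ===\n{about_text}\n\n")
        for p in posts:
            f.write(f"=== POST: {p.get('title', '')} ===\n{p.get('content', '')}\n\n")
```

Step 6 is then just calling write_board_file once per board.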

Process AI Review w/ Claude AI

Claude AI allows for long-context prompts, giving users the ability to input several documents at once for context and comprehension. This is the AI model we'll be using for this exploration.

  1. Upload the relevant TXT files. In this test we are using 'EMCriteria.txt', 'gamtechtop.txt', 'mwatalktop.txt', 'sativatop.txt', and 'theorytop.txt'. Only five uploads are currently allowed with Claude.
  2. Prompt Claude with the following instructions:
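Since I gave the actual instructions through the chat UI, here's just a rough sketch of how the same upload-and-instruct flow could be assembled into a single long-context prompt programmatically. The INSTRUCTIONS string below is a stand-in, not the prompt I actually used:

```python
# File names from this test; INSTRUCTIONS is a placeholder, not the real prompt
FILES = ["EMCriteria.txt", "gamtechtop.txt", "mwatalktop.txt",
         "sativatop.txt", "theorytop.txt"]
INSTRUCTIONS = "<review each board's posts against the EM criteria>"

def build_prompt(files=FILES, instructions=INSTRUCTIONS):
    sections = []
    for name in files:
        with open(name, encoding="utf-8") as f:
            # Label each document so the model can tell them apart
            sections.append(f"=== {name} ===\n{f.read()}")
    sections.append(instructions)
    return "\n\n".join(sections)
```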

AI Results

NOTE: These are just TEST RESULTS; they are NOT indicative of my own opinions, and are not being considered an accurate or complete analysis.

Areas for Improvement & Notes

  1. Only the top 5 posts per board were used in processing; more posts need to be processed to see whether more thorough results can be achieved.
  2. Need to identify, or implement placeholders for, multimedia assets in extraction (images, videos, etc.), as Claude and other long-context LLMs can only process text.
  3. Prompting a single board review at a time may result in better AI comprehension & more attention per board. However, long-context LLMs tend to be expensive, so it might be best to meet in the middle somewhere (a rough token-count comparison follows this list).
  4. Results were not completely accurate, likely due to the limited depth of content reviewed for each board, guidelines not being fully tuned for AI comprehension, and a need for more instruction iteration.
  5. The ability to ‘continue’ the conversation could be very insightful for board owners. Long term, opening up an AI quality assessor and assistant to board owners could be hugely impactful.
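On the cost point in note 3, the tradeoff is easy to ballpark in a few lines. This uses the common ~4-characters-per-token rule of thumb (just a heuristic, not an exact count): one combined prompt pays for the guidelines once, while per-board prompts pay for them on every call:

```python
def estimate_tokens(path, chars_per_token=4):
    # Rough heuristic: ~4 characters per token for English text
    with open(path, encoding="utf-8") as f:
        return len(f.read()) // chars_per_token

board_files = ["gamtechtop.txt", "mwatalktop.txt", "sativatop.txt", "theorytop.txt"]
guidelines = estimate_tokens("EMCriteria.txt")  # sent along with every prompt
per_board = [estimate_tokens(f) for f in board_files]

combined = guidelines + sum(per_board)             # one prompt covering all boards
separate = sum(guidelines + t for t in per_board)  # one prompt per board
print(f"combined: ~{combined} tokens, separate: ~{separate} tokens total")
```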

Next Steps & Further Testing

  1. Refine the extraction process to allow for more comprehensive analysis (extract more top posts, comments, and contextual information per board; implement placeholders for multimedia assets). There are also future possibilities for further integration (i.e. board activity, mod activity, etc.).
  2. Iterate on instructions further, attempting both simple & comprehensive versions. Possibly iterate on instructions within the guidelines themselves.
  3. Iterate prompts with just one or two boards to find a good balance of depth for each.
  4. Expand guidelines to be fully comprehensive and specific for AI review.
  5. Prepare utilization of the Claude API once prompting is refined (a rough sketch of what that could look like follows this list).
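As a head start on step 5, here's a rough sketch of what a call could look like through Anthropic's Python SDK once API access opens up. The model name and token limit are assumptions on my part, and build_prompt is the helper sketched earlier:

```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # placeholder key

# The completions endpoint wraps the prompt in Human/Assistant turns
response = client.completions.create(
    model="claude-2",              # assumption: whichever long-context model is offered
    max_tokens_to_sample=2000,     # assumption: enough room for a full board review
    prompt=f"{anthropic.HUMAN_PROMPT} {build_prompt()}{anthropic.AI_PROMPT}",
)
print(response.completion)
```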

Conclusion

While this is just one test, it shows that harnessing the potential of AI to monitor and refine digital community spaces offers transformative possibilities. As demonstrated, our approach goes beyond mere content analysis, bringing a deep understanding of user engagement and alignment with set guidelines. With further refinement and iterative learning, the potential for these tools grows exponentially. They could help ensure that approved boards not only remain vibrant and engaging but also adapt to the evolving needs of their audiences.

Further, there is massive potential in allowing board owners or community creators to interface with an AI directly. While the resources needed to run long-context AI models may prove prohibitive, the BBS model could theoretically integrate a way to redeem rewards to cover usage of the AI model at cost.

In other tests, I've also given the AI model single posts to analyze alongside the METABBS Extended Guidelines and our Super Comment guidelines in order to get AI-generated suggestions for tip amounts and individual post feedback (the prompt pattern is sketched below). Those tests also showed a powerful understanding of the content and comments within posts, and the suggestions tended to be close to a realistic tip amount for the content.
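That single-post test followed the same pattern as the board reviews; here's a minimal sketch (the file names and instruction wording are illustrative, not the exact ones I used):

```python
def build_tip_prompt(post_text):
    # Hypothetical file names standing in for the two guideline documents
    docs = []
    for name in ["METABBSExtendedGuidelines.txt", "SuperCommentGuidelines.txt"]:
        with open(name, encoding="utf-8") as f:
            docs.append(f"=== {name} ===\n{f.read()}")
    docs.append(f"=== POST ===\n{post_text}")
    docs.append("Suggest a tip amount for this post per the guidelines above, "
                "and give brief feedback on the post's quality.")  # illustrative wording
    return "\n\n".join(docs)
```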

AI analysis could potentially be integrated into several layers of the BBS Network system, as I've mentioned above: at the higher level of Engagement Mining management and analysis, at the mid level of board management and review, and even at the individual level to review post quality, suggest tips, or even suggest moderation action. These integrations could be purely advisory, but there is also the option of functional integration (i.e. the AI actually takes an action, such as a moderation step, along with its report/advisory response).

The potential here is near-limitless, and AI language models are advancing at an incredible rate. The comprehension and quality of analysis will only get better, and utilization of longer-context or higher quality models will inevitably become more and more cost effective. If we can achieve results like these with minimal testing in current models, what could we do with further refinement, or even more advanced models?

Let me know what you think about this test, the analysis, your suggestions, your feedback, or any other comments you may have!

These tests have been fascinating, and I'm happy to share more if there are specific requests as well. Claude still has yet to release their API, so there is time for testing and iteration before any functional exploration. Do you have a question for me to ask the AI? Want to know more about another test, or to try out your own iteration? Let me know!
