We try to be very mindful of the quality of the content that instaGrok returns, especially when it comes to protecting younger students from offensive content.
One of the ways we try to ensure safety is through our profanity filter algorithm, which automatically discards any results that contain offensive language. As with all our algorithms, it is continually improving. Unfortunately, it is less effective for videos, since the technology cannot automatically analyze a video's visuals or dialogue. It is encouraging that YouTube can display auto-generated closed captions for most videos; once these captions become accessible programmatically, we will be able to analyze video content, and hence filter it, more effectively.
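At its simplest, a filter like this checks result text against a list of blocked words. The sketch below is purely illustrative, assuming a basic word-list approach; the word list, function names, and logic are hypothetical and not instaGrok's actual implementation:

```python
# Illustrative sketch only -- the real filter is more sophisticated.
# The word list and function names here are hypothetical placeholders.
BLOCKED_WORDS = {"badword1", "badword2"}

def contains_profanity(text):
    """Return True if any blocked word appears in the text."""
    # Normalize: strip common punctuation and lowercase each word.
    words = {w.strip(".,!?;:").lower() for w in text.split()}
    return not BLOCKED_WORDS.isdisjoint(words)

def filter_results(results):
    """Discard any search results whose text contains offensive language."""
    return [r for r in results if not contains_profanity(r)]
```

A real system would also need to handle misspellings, embedded words, and context, which is part of why video (with no machine-readable text) is much harder to filter.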
We also provide a way for teachers to report inappropriate content. This can be done by clicking the trash can icon next to each result. For general users or students, this action simply hides the item from their own results; for teachers, it also shows a prompt asking whether they would like to report the item.
Clicking Report on this screen will cause that site to be removed from your results and everyone else's (more specifically, it will be added to our blacklist database, along with the date and the name of the user who reported it, enabling us to review and follow up). Please be judicious in using this feature, because it affects other people!
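Conceptually, each blacklist entry records what was reported, by whom, and when, so that reports can be reviewed later. A minimal sketch of that idea follows; the field and function names are hypothetical and do not reflect instaGrok's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch -- field names are hypothetical, not instaGrok's schema.
@dataclass
class BlacklistEntry:
    site_url: str
    reported_by: str  # name of the reporting teacher
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def report_site(blacklist, site_url, teacher_name):
    """Record a reported site, with reporter and timestamp, for later review."""
    blacklist.append(BlacklistEntry(site_url, teacher_name))

def is_blocked(blacklist, site_url):
    """Sites on the blacklist are removed from everyone's results."""
    return any(entry.site_url == site_url for entry in blacklist)
```

Storing the reporter and date is what makes follow-up possible: a reviewer can see who flagged a site and when, and reverse a mistaken report if needed.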
We would love to hear your feedback on how we can continue to improve safety in instaGrok!