Have you ever been on a video call and wanted to draw a box-and-line diagram, like you would at a whiteboard?
Been frustrated by how clunky mouse gestures are compared to a pen?
Wanted to easily share and re-use parts of your diagram in your next one?
If only…
- whiteboard apps didn’t put your diagrams in their database
- drawing a box, or connecting two boxes, could happen with a gesture, at the speed of thought
- labelled boxes and connectors formed a semantic graph that could be re-used (a quick sketch follows this list)
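
To make that last wish concrete, here is a rough sketch of how a couple of labelled boxes and a connector might look as subjects in such a graph. The `Box`/`Connector` types and the `label`, `from` and `to` properties are hypothetical names for illustration, not an existing vocabulary:

```typescript
// Hypothetical diagram vocabulary, JSON-LD-flavoured: every box and connector
// is a subject with its own identity, so it can be referenced from elsewhere.
type Reference = { '@id': string };

interface Box {
  '@id': string;
  '@type': 'Box';
  label: string;
}

interface Connector {
  '@id': string;
  '@type': 'Connector';
  label?: string;
  from: Reference; // the box the line starts at
  to: Reference;   // the box the line ends at
}

// A "Browser" and a "Server" box joined by an "HTTP" arrow
const diagram: (Box | Connector)[] = [
  { '@id': 'browser', '@type': 'Box', label: 'Browser' },
  { '@id': 'server', '@type': 'Box', label: 'Server' },
  {
    '@id': 'browser-server', '@type': 'Connector', label: 'HTTP',
    from: { '@id': 'browser' }, to: { '@id': 'server' }
  }
];
```

Because each element has its own identity, re-using part of a diagram is just a matter of referencing the same subjects from the next one.
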
Sounds hard? We have the basic pieces already:
- Solid lets you store interconnected data, like diagrams, where you choose
- m-ld lets you collaborate in real time on a shared RDF data structure (for example, in the demo)
- (What you may not know) m-ld was originally motivated by an app just like this, which is currently on my shelf, and it already does shape recognition!
Like this:
What needs to be done is to integrate these three pieces, polish the result, and start thinking about the artificial intelligence possibilities…
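
To give a flavour of the m-ld piece of that integration, here is a minimal sketch of sharing the diagram graph with the m-ld JavaScript engine. The imports, config and remotes shown are assumptions based on the engine's documentation and may differ by version and messaging setup; the `Box`/`Connector` vocabulary is the hypothetical one from the earlier sketch, not a defined standard.

```typescript
import { clone, uuid } from '@m-ld/m-ld';
import { MemoryLevel } from 'memory-level';
// Assumption: Socket.IO messaging; other remotes could be substituted, and
// the import path may vary between engine versions.
import { IoRemotes } from '@m-ld/m-ld/ext/socket.io';

// One m-ld domain per shared diagram; 'genesis' marks the first clone of it.
const meld = await clone(new MemoryLevel(), IoRemotes, {
  '@id': uuid(),
  '@domain': 'my-diagram.example.org',       // hypothetical domain name
  genesis: true,
  io: { uri: 'http://localhost:3000' }       // hypothetical Socket.IO server
});

// Drawing a box or a connector becomes a write of subjects to the shared graph.
await meld.write({
  '@graph': [
    { '@id': 'browser', '@type': 'Box', label: 'Browser' },
    { '@id': 'server', '@type': 'Box', label: 'Server' },
    {
      '@id': 'browser-server', '@type': 'Connector', label: 'HTTP',
      from: { '@id': 'browser' }, to: { '@id': 'server' }
    }
  ]
});

// Every clone of the domain converges on the same graph, so a collaborator
// can look up the 'server' box by identity and re-use it in another diagram.
console.log(await meld.get('server'));
```

The Solid piece would then decide where that shared graph ultimately lives: storage the author chooses, rather than a whiteboard vendor's database.
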
What do you think?