tldraw
@tldraw.com
5 months ago
Getting multiple AI agents to work together on an infinite canvas means tackling a huge coordination problem. Here's what we learned while building our fairies feature (December only, on tldraw.com).

Agents only receive new context when they're prompted. While they stream back their response, they can't get new context. So unlike humans, who actively observe, communicate, and learn while working collaboratively, agents are essentially working blind. No real-time feedback.

In a multi-agent system, things can change a lot while an agent is working. Another agent might be editing the canvas. Humans might be moving things around. For example, on tldraw, one person can draw something while speaking to their collaborator and looking at what they are drawing. An agent can only see the state of the canvas or communicate with other agents before and after it has finished drawing.

How do you coordinate when agents spend most of their time 'underwater' and only briefly and intermittently 'return to the surface' for context?

Our first attempt at orchestrating multiple fairies was to create a shared todo list for all of the agents and give it as context alongside screenshots and structured data about the canvas (there's a sketch of this after the thread).

There were many issues with this. What if two agents start working on the same task? Even if agents work on different tasks, how do they work together towards an overall goal? And in doing this, how do they coordinate between different parts of the overall goal?

For example, if three fairies began working on a wireframe with todos for a header, body, and footer, they would often place these in completely arbitrary places relative to each other.

We ended up using a 'fairy management system', where one fairy would draft the plan, create the todos, assign the todos to other agents, and then coordinate their output with follow-up todos.

In earlier builds, fairies had different personalities and talents. The orchestrator fairy would assign tasks to the most appropriate worker fairy available, preferring a more creative fairy for some tasks and more operationally inclined fairies for others.

You can see this if you select three fairies and prompt the group. One fairy will be elected as the orchestrator, create the project, and wait for the other fairies to finish the work.

We also encouraged the agents to request new context more often. This would create a new prompt to continue what they were doing before, but with new context about the canvas, todo list, and fairies, too.

We're unshipping the fairies at the end of the month, so get 'em while you can!

You can also read the source code at github.com/tldraw/tldraw. Check out tldraw.com to try it yourself.
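The shared todo list described above maps naturally onto a small data structure. Here's a minimal sketch in TypeScript; the names (`SharedTodoList`, `claim`) are hypothetical, not tldraw's actual API. The key move is that an agent atomically claims a todo before working on it, which addresses the "two agents pick the same task" failure mode:

```ts
// Hypothetical sketch of a shared todo list with claim semantics.
// None of these names are tldraw's real API.
type TodoStatus = "open" | "claimed" | "done";

interface TodoItem {
  id: string;
  description: string;
  status: TodoStatus;
  claimedBy?: string; // fairy id
}

class SharedTodoList {
  private todos = new Map<string, TodoItem>();

  add(id: string, description: string): void {
    this.todos.set(id, { id, description, status: "open" });
  }

  // An agent claims a todo before working on it. Because the list
  // lives in one shared place, two agents can't both end up owning
  // the same task.
  claim(todoId: string, fairyId: string): boolean {
    const todo = this.todos.get(todoId);
    if (!todo || todo.status !== "open") return false; // already taken
    todo.status = "claimed";
    todo.claimedBy = fairyId;
    return true;
  }

  complete(todoId: string): void {
    const todo = this.todos.get(todoId);
    if (todo?.status === "claimed") todo.status = "done";
  }

  // Serialized into each prompt alongside canvas screenshots and
  // structured shape data, so every agent sees the current plan.
  toPromptContext(): string {
    return [...this.todos.values()]
      .map((t) => `- [${t.status}] ${t.description} (${t.claimedBy ?? "unclaimed"})`)
      .join("\n");
  }
}
```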
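The 'fairy management system' can be sketched the same way, under the same caveat (invented names, not the shipped code): one agent is elected orchestrator, drafts the plan as todos, assigns them to the most appropriate workers, then coordinates their output with follow-up work.

```ts
// Hypothetical orchestrator loop; not tldraw's shipped implementation.
interface Fairy {
  id: string;
  // Earlier builds gave fairies talents, which the orchestrator
  // used when assigning work.
  talent: "creative" | "operational";
  run(task: string): Promise<string>; // one prompt/response turn
}

interface PlannedTodo {
  description: string;
  preferredTalent: "creative" | "operational";
}

// Stub: in the real feature, drafting the plan is itself a model call.
declare function planTodos(f: Fairy, prompt: string): Promise<PlannedTodo[]>;

async function promptGroup(fairies: Fairy[], prompt: string): Promise<void> {
  // Elect an orchestrator; here, simply the first fairy selected.
  const [orchestrator, ...rest] = fairies;
  const workers = rest.length > 0 ? rest : [orchestrator];

  // The orchestrator drafts the plan and creates the todos.
  const todos = await planTodos(orchestrator, prompt);

  // Assign each todo to the most appropriate available worker.
  const results = await Promise.all(
    todos.map((todo, i) => {
      const worker =
        workers.find((w) => w.talent === todo.preferredTalent) ??
        workers[i % workers.length];
      return worker.run(todo.description);
    })
  );

  // Coordinate the output with follow-up todos — the step that fixes
  // "header, body and footer drawn in arbitrary places".
  await orchestrator.run(
    `Review these results and create follow-up todos:\n${results.join("\n")}`
  );
}
```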
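Finally, the 'return to the surface' mechanic: since an agent can't receive context mid-stream, one workaround is letting it end its turn and ask to be re-prompted with a fresh snapshot. A sketch, again with hypothetical names:

```ts
// Hypothetical sketch of the "request new context" loop. An agent can't
// see canvas changes while streaming, so it ends the turn and asks to
// continue with a fresh snapshot instead.
interface ContextSnapshot {
  canvasScreenshot: Uint8Array;
  todoList: string;
  otherFairies: string[];
}

declare function takeSnapshot(): Promise<ContextSnapshot>;
declare function runTurn(
  prompt: string,
  context: ContextSnapshot
): Promise<{ text: string; wantsNewContext: boolean }>;

async function runAgent(initialPrompt: string): Promise<string> {
  let prompt = initialPrompt;
  while (true) {
    // Surface: gather context only at the prompt boundary.
    const context = await takeSnapshot();
    // Underwater: the agent works blind until this turn completes.
    const turn = await runTurn(prompt, context);
    if (!turn.wantsNewContext) return turn.text;
    // Continue the previous work, but with new context about the
    // canvas, todo list, and other fairies.
    prompt = `Continue your previous work:\n${turn.text}`;
  }
}
```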
Looking for last-minute gift ideas for $50 or less? Give the gift of blazing fast NVMe drives.

It's been a busy week of shipping ⬇️

#React2Shell PSA for anyone shipping Next.js / React Server Components: upgrade to a non-vulnerable version and rotate secrets after.
Mike Gualtieri shared what Netlify did platform-side, plus the exploit traffic trends we’re seeing: www.netlify.com/blog/ongoing...

We’ve made two updates to make Systematic Reviews in Elicit more rigorous and defensible:
Strict screening criteria - Mark criteria as 'strict' to exclude papers that fail to meet them.
80-paper Systematic Review reports - The report at the end of your Systematic Review can use 2x the papers.

Traditional systematic reviews have strict inclusion/exclusion criteria, where any paper has to meet all criteria to be screened in. Now, you can enable strict criteria within Elicit to achieve the same results and automatically screen out papers failing to meet a strict criterion.

Papers that fail any strict criteria will appear at the bottom of the screening results page with a label showing they were excluded based on strict criteria. You can still decide to manually include them if needed. This gives you full control over your inclusion logic in Elicit.

After you search, screen, and extract data from the papers in your Systematic Review, you can generate a report to give you an overview of the research. Now, Elicit can use 80 papers (double the previous limit of 40) to generate that report, giving you a more comprehensive look into your research.

These two Systematic Review features are now live for Pro, Teams, and Enterprise users at elicit.com
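The strict-criteria rule is simple enough to state in code. A sketch with hypothetical types, not Elicit's implementation: a paper is screened out the moment it fails any criterion marked strict, while papers failing only non-strict criteria stay in the normal results.

```ts
// Hypothetical sketch of strict-criteria screening; not Elicit's code.
interface Paper {
  title: string;
}

interface Criterion {
  name: string;
  strict: boolean;
  passes(paper: Paper): boolean;
}

function screen(papers: Paper[], criteria: Criterion[]) {
  const included: Paper[] = [];
  // Excluded papers still appear, labeled, at the bottom of the
  // results, where they can be manually re-included.
  const excludedByStrict: Paper[] = [];

  for (const paper of papers) {
    const failsStrict = criteria.some((c) => c.strict && !c.passes(paper));
    (failsStrict ? excludedByStrict : included).push(paper);
  }
  return { included, excludedByStrict };
}
```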
Your SPA is invisible to ChatGPT and Perplexity.
One click changes that. 🪄
www.youtube.com/watch?v=BnVI...

Coding agents are amazing.
We used machine translation + human review for Tuist's i18n. Traded some quality for no mixed-language UX.
But some translations broke our conventions, and the docs wouldn't build.
Fix? Tell the agent to use the Weblate CLI, verify it compiles locally. Done.

stuff that i did not know off the top of my head before working at val.town
- exactly how long a domain label can be (it's 63 characters)
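That 63-character limit comes from the DNS spec (RFC 1035: each dot-separated label is at most 63 characters, and a full name tops out at 253 characters in text form). A quick TypeScript check, as a sketch:

```ts
// Each dot-separated label in a domain name is limited to 63 characters
// (RFC 1035); the full name is limited to 253 characters in text form.
function isValidDomainLength(domain: string): boolean {
  if (domain.length > 253) return false;
  return domain
    .split(".")
    .every((label) => label.length >= 1 && label.length <= 63);
}

// A 63-character label is fine; 64 is not.
console.log(isValidDomainLength("a".repeat(63) + ".example.com")); // true
console.log(isValidDomainLength("a".repeat(64) + ".example.com")); // false
```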