Thursday, December 15, 2022

Show HN: Natural language Twitter search using Codex https://ift.tt/w7Z9oAO

We built a structured search engine for Twitter called Bird SQL, available at https://ift.tt/z1mNh6f. Our search interface uses OpenAI Codex to translate natural language to SQL. Our backend then verifies the SQL, executes it, and displays the results on the web app. This makes large structured datasets like a scrape of Twitter easy for anyone to explore.

As background, while working on text-to-SQL as a general problem, we came to believe one of its most powerful applications is as a search tool, because:

- SQL is hard to write by hand and prone to errors
- It allows you to iterate quickly if you're exploring a new dataset
- A lot of contextual information that you'd normally have to internalize (e.g. your data's schema) can be automatically generated and offloaded to the language model

Using large language models (LLMs) like Codex to write the SQL for you means you don't have to worry about the nitty-gritty language details while still benefiting from the power of a language like SQL. Also, after seeing the results of a query, you can inspect (and, if necessary, change) the SQL. The lack of this sort of explainability of the query result is one of the more notorious challenges of returning the output of an LLM directly to the user. Additionally, using LLMs in this way makes these kinds of queries over structured data accessible to people who know little or no SQL.

While Bird SQL shares significant infrastructure with our more general LLM-powered search engine over unstructured data (Ask Perplexity, https://perplexity.ai [1]), the two approaches and their respective challenges are quite different. For example, the types of models are different (GPT-3.5 vs. Codex), the model prompts have different structures, and verifying model output differs when it is text versus when it is code. We are currently exploring ways to combine the two approaches, such as using the results of retrieving information from a structured source (as in Bird SQL) as one of the inputs for the LLM to interpret or summarize (as in Ask Perplexity).

We would love to hear your questions, suggestions, and feedback!

[1] https://ift.tt/12Y3Ndt

https://ift.tt/rECy8oX
December 15, 2022 at 11:12PM
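To make the translate-verify-execute flow above concrete, here is a minimal sketch in Python of a text-to-SQL search loop. It is not Bird SQL's actual implementation: the schema (TWEETS_SCHEMA), the SQLite file tweets.db, and the model name are illustrative placeholders (Codex has since been retired, so a current OpenAI chat model stands in), and the verification step is reduced to a simple read-only check.

# Minimal sketch of an LLM text-to-SQL search loop.
# Assumptions: the `openai` Python package (v1+), OPENAI_API_KEY in the
# environment, and a local SQLite mirror of the tweets in tweets.db.
import re
import sqlite3

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The schema is injected into the prompt so the model does not have to guess it.
TWEETS_SCHEMA = """
CREATE TABLE tweets (
    id INTEGER PRIMARY KEY,
    author_handle TEXT,
    text TEXT,
    like_count INTEGER,
    created_at TEXT
);
"""

def question_to_sql(question: str) -> str:
    """Translate a natural-language question into a single SQLite query."""
    prompt = (
        "Translate the question into one read-only SQLite SELECT statement.\n"
        f"Schema:\n{TWEETS_SCHEMA}\n"
        f"Question: {question}\n"
        "Return only the SQL, with no explanation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the post used OpenAI Codex
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    sql = response.choices[0].message.content.strip()
    # Naive cleanup in case the model wraps the query in a code fence.
    return sql.removeprefix("```sql").removeprefix("```").removesuffix("```").strip()

def verify_sql(sql: str) -> str:
    """Cheap guardrail standing in for real verification: single SELECT only."""
    if not re.match(r"(?is)^\s*select\b", sql) or ";" in sql.rstrip("; \n"):
        raise ValueError(f"Refusing to run non-SELECT or multi-statement SQL: {sql!r}")
    return sql

def search(db_path: str, question: str):
    """Generate, verify, and execute the SQL, returning it with the rows."""
    sql = verify_sql(question_to_sql(question))
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()
    return sql, rows

if __name__ == "__main__":
    sql, rows = search("tweets.db", "Which 5 accounts got the most likes in December 2022?")
    print(sql)
    for row in rows:
        print(row)

Returning the generated SQL alongside the rows is what gives the user the inspect-and-edit explainability the post describes: if the query looks wrong, it can be corrected and rerun directly.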

