
Show HN: Search San Francisco using natural language
3 points by furiousteabag | 1 comment on Hacker News.
Hey HN! We're Alex and Szymon from Bluesight ( https://bluesight.ai/ ), where we're developing a foundation model for satellite data. We've built a demo to showcase what current state-of-the-art models can do and to identify areas for improvement. The demo lets you search for objects in San Francisco using natural language: you can look for things like Tesla cars, dry patches, boats, and more.

Key features:

- Search using text, or select an object from the image as a source ("aim" icon)
- Toggle between object search (default) and tile search ("big" toggle, useful when contextual information matters, such as tennis courts)
- Adjust results with downvotes (useful when results are water images; one possible mechanism is sketched at the end of this post)
- Click on tiles to locate them on a map
- Control the number of retrieved tiles with a slider

How it works: we use OpenAI's CLIP model ( https://ift.tt/X3OaJxh ) to embed texts and images into the same embedding space, then run a similarity search in that space using a text query or a source image. Because vanilla CLIP performs poorly on satellite data, we use CLIP finetuned on pairs of satellite images and OpenStreetMap ( https://ift.tt/r2q3H9E ) tags ( https://ift.tt/CNbzjRE ). We pre-segment objects with Meta's Segment Anything Model ( https://ift.tt/LlEPXg7 ) and pre-compute CLIP embeddings for each object.

We'd love to hear your thoughts! What worked well for you? Where did it fail? What features do you wish it had? Any real-world problems you think this could help with?
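For anyone curious about the moving parts, here is a minimal sketch of the retrieval step: embed tiles and a text query with CLIP and rank by cosine similarity. This is a simplified illustration, not our production code; the Hugging Face wrapper, the vanilla OpenAI checkpoint, and the tile paths are stand-ins (the real demo uses the satellite-finetuned CLIP linked above and a pre-built index).

```python
# Sketch: embed image tiles and a text query with CLIP, rank by cosine similarity.
# The checkpoint and file paths are placeholders, not the demo's actual assets.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def embed_images(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize for cosine similarity

@torch.no_grad()
def embed_text(query):
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical tile paths; in practice these embeddings are pre-computed and stored.
tile_paths = ["tiles/sf_0001.png", "tiles/sf_0002.png", "tiles/sf_0003.png"]
tile_embeddings = embed_images(tile_paths)                  # (num_tiles, 512)

query_embedding = embed_text("tennis courts")               # (1, 512)
scores = (tile_embeddings @ query_embedding.T).squeeze(1)   # cosine similarities
for i in scores.argsort(descending=True):
    print(tile_paths[i], float(scores[i]))
```

Searching from a selected object (the "aim" icon) is the same ranking, just with an image embedding as the query instead of a text embedding.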
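The offline pre-segmentation step could look roughly like the following: run SAM's automatic mask generator over a tile, crop each detected object, and cache one CLIP embedding per crop. The checkpoint file name, the minimum-area filter, and the bounding-box crop are illustrative assumptions, not a description of our exact pipeline.

```python
# Sketch of the offline step: segment objects with SAM, then pre-compute a CLIP
# embedding per object crop. Checkpoint names and thresholds are illustrative.
import numpy as np
import torch
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
from transformers import CLIPModel, CLIPProcessor

# SAM for automatic object masks (checkpoint path is an assumption).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# CLIP for per-object embeddings (vanilla checkpoint as a stand-in for a satellite finetune).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def precompute_object_embeddings(tile_path, min_area=200):
    """Segment one tile with SAM and return (bbox, embedding) pairs per object."""
    image = np.array(Image.open(tile_path).convert("RGB"))
    results = []
    for mask in mask_generator.generate(image):   # dicts with "bbox" (XYWH), "area", "segmentation"
        if mask["area"] < min_area:               # drop tiny segments that are likely noise
            continue
        x, y, w, h = (int(v) for v in mask["bbox"])
        crop = Image.fromarray(image[y:y + h, x:x + w])
        inputs = processor(images=crop, return_tensors="pt")
        feat = clip.get_image_features(**inputs)
        feat = feat / feat.norm(dim=-1, keepdim=True)
        results.append((mask["bbox"], feat.squeeze(0)))
    return results
```

These per-object embeddings would be stored in a vector index so that a text or source-image query only needs a single CLIP forward pass at search time.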
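The post above doesn't spell out how the downvote adjustment works internally. One common approach, shown here purely as an assumed illustration, is a Rocchio-style relevance-feedback update that pushes the query embedding away from the downvoted results; the function name and the `beta` weight are hypothetical.

```python
# Assumed Rocchio-style relevance feedback: move the query embedding away from
# the mean of downvoted result embeddings, then re-normalize for cosine search.
import torch

def downvote_adjust(query_emb, downvoted_embs, beta=0.5):
    if len(downvoted_embs) == 0:
        return query_emb
    negative = torch.stack(list(downvoted_embs)).mean(dim=0)
    adjusted = query_emb - beta * negative
    return adjusted / adjusted.norm(dim=-1, keepdim=True)
```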
