18 comments

  • RobotToaster 1 hour ago
    https://raw.githubusercontent.com/apple/ml-sharp/refs/heads/...

    "Exclusively for research purposes" so not actually open source.

    • ffsm8 1 hour ago
      The readme doesn't claim it's open source either, from what I can tell. Seems to be just a misguided title from the person who submitted it to HN

      The only reference seems to be in the acknowledgements, saying that this builds on top of open source software

    • andy99 59 minutes ago
      Meta’s campaign to corrupt the meaning of Open Source was unfortunately very successful and now most people associate releasing the weights with open source.
      • singpolyma3 25 minutes ago
        Releasing weights is fine but you also need to be allowed to... Use the model :P
        • hwers 17 minutes ago
          You’re perfectly free to use it for private use; model outputs have been deemed public domain
          • ordersofmag 13 minutes ago
            Or you're free to use the output for commercial use if you can get someone else to use the tool to make the (uncopyrighted) output you want.
      • Blackthorn 53 minutes ago
        It's deliciously ironic how a campaign to dilute the meaning of free software ended up getting diluted itself.
        • sho_hn 43 minutes ago
          It's gratifying. I used to tilt at windmills on HN about this and people would be telling me with absolute condescension how the ship had sailed regarding the definition of Open Source, relegating my own life's work to anachronism.

          People slowly waking up to how daft and hype-driven the misuse of the term was all along has been amazing.

          • archerx 30 minutes ago
            The wildest one is how people say that just because you produce open source software, you should be happy that multibillion-dollar corporations are leeching value from your work while giving nothing back and in fact making your life harder. That's the biggest "piss on my back and tell me it's raining" bullshit I've ever heard, and it makes me not want to open source a damn thing without feeling like a fool.
            • coliveira 8 minutes ago
              I think exactly like this. If I created a tool and it were used for free by billion dollar corporations to enrich themselves, I would consider it a personal loss.
      • ProofHouse 29 minutes ago
        Thank you! Shame on all these big corps that keep doing this. Meta #1, Apple #2, pseudo/fake journalists #3
    • zarzavat 1 hour ago
      There's no reason to believe that weights are copyrightable. The only reason to pay attention to this "license" is that it's enforced by Apple; in that sense they can write whatever they want in it, "this model requires giving ownership of your first born son to Apple", etc. The content is irrelevant.
    • thebruce87m 22 minutes ago
      I’m going to research if I can make a profitable product from it. I’ll publish the results of course.
      • eleventyseven 18 minutes ago
        Pretty sure this is a joke, but the actual license is written by lawyers who know what they are doing:

        > “Research Purposes” means non-commercial scientific research and academic development activities, such as experimentation, analysis, testing conducted by You with the sole intent to advance scientific knowledge and research. “Research Purposes” does not include any commercial exploitation, product development or use in any commercial product or service.

    • sa-code 1 hour ago
      Should the title be corrected to source-available?
      • RobotToaster 8 minutes ago
        "weights-available" is probably the correct term, since it doesn't look like the training data is available.
    • LtWorf 35 minutes ago
      When AI and open source are used together, you can be sure it's not open source.
    • echelon 1 hour ago
      That sucks.

      I'm writing open desktop software that uses WorldLabs splats for consistent location filmmaking, and it's an awesome tool:

      https://youtube.com/watch?v=iD999naQq9A

      This next year is going to be about controlling a priori what your images and videos will look like before you generate them.

      3D splats are going to be incredibly useful for film and graphics design. You can rotate the camera around and get predictable, consistent details.

      We need more Gaussian models. I hope the Chinese AI companies start building them.

    • hwers 1 hour ago
      I don’t agree with this idea that for a model to be open source you have to be able to make a profit off of it. Plenty of open source code licenses don't impose that constraint
      • tremon 1 hour ago
        https://opensource.org/osd#fields-of-endeavor

        > The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, [..]

      • Aachen 1 hour ago
        That's source-available: you get to see the code and learn from it, but if you're not allowed to use it however you want (the only common restrictions being that you must credit the creator(s) and must allow others the same freedoms on derivative works), then it doesn't meet the traditional definition of open source
      • cwillu 1 hour ago
        And you would be wrong as a simple question of fact.
      • wahnfrieden 1 hour ago
        The only popular one I know is CC-NC but that is not open source
  • neom 1 hour ago
    • the8472 1 hour ago
      imo https://x.com/SadlyItsBradley/status/2001227141300494550 is a better demo than their own project page
    • pcurve 14 minutes ago
      The authors appear to be all foreign-born.

      Just curious, for those who are informed on this matter: is most research done by foreign-born people? What happened to the big STEM push?

      I don't mean to stir up political debate... just curious what the reality is, especially given the decline in foreign students coming over in recent years.

      • foota 0 minutes ago
        I'm not trying to be too PC, but you can't really tell where someone was born based on their name.

        That said, the US only has some 5% of the world's population (albeit probably a larger proportion of the literate population), so you'd only expect some fraction of the world's researchers to be US-born. Not to mention that US births are an even smaller fraction of world births (2.5-3%, per Google), so you'd expect an even smaller fraction of US-born researchers going forward. So even if we assume we're on par with peer countries, you'd only expect US-born researchers to be a fraction of the overall research population. We'd have to be vastly better at educating people for it to be otherwise, which is a long shot.

        Obviously this makes turning away international students incredibly stupid, but what are we to do against stupidity?

      • saagarjha 0 minutes ago
        1. People with foreign sounding names may have been born in the United States.

        2. People who were born outside the United States but moved here to do research a while back don’t suddenly stop doing research here.

  • analog31 26 minutes ago
    I wonder if it helps that a lot of people take more than one picture of the same thing, thus providing them with effectively stereoscopic images.
  • jtrn 1 hour ago
    I was thinking of testing it, but I have an irrational hatred for Conda.
    • optionalsquid 1 hour ago
      You could use pixi instead, as a much nicer/saner alternative to conda: https://pixi.sh

      Though in this particular case, you don't even need conda. You just need Python 3.13 and a virtual environment. If you have uv installed, it's even easier:

          git clone https://github.com/apple/ml-sharp.git
          cd ml-sharp
          uv sync
          uv run sharp
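
      And if you'd rather avoid uv as well, a plain venv should work too. A minimal sketch, assuming the repo's pyproject.toml is pip-installable and exposes the same `sharp` entry point the uv commands above imply:

          python3.13 -m venv .venv
          source .venv/bin/activate
          pip install -e .
          sharp --help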
    • moron4hire 1 hour ago
      You aren't being irrational.
    • jtreminio 1 hour ago
      You can simply use a `uv` env instead?
  • d_watt 1 hour ago
    I’ve been using some time off to explore the space, and the related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal-consistency angle that makes it much harder and more compute-intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.

    https://github.com/TencentARC/StereoCrafter https://github.com/TencentARC/GeometryCrafter

    • sho_hn 41 minutes ago
      I would love to see your examples.
  • lvl155 40 minutes ago
    I don’t know when Apple turned evil, but it's hard for me to support them further after nearly four decades. Everything they do now is the direct opposite of what they stood for in the past.
    • tsunamifury 22 minutes ago
      Apple absolutely never believed in open source in the past, so yes, they are not the same
  • hermitcrab 35 minutes ago
    "Sharp Monocular View Synthesis in Less Than a Second"

    "Less than a second" is not "instantly".

    • 0_____0 19 minutes ago
      If you're concerned by that, I have some bad news about instant noodles.
    • ethmarks 28 minutes ago
      What would your definition of "instantly" be? I would argue that, compared to taking minutes or hours, taking less than a second is fast enough to be considered "instant" in the colloquial definition. I'll concede that it's not "instant" in the literal definition, but nothing is (because of the principle of locality).
      • cubefox 3 minutes ago
        Wittgenstein, Philosophical Investigations, §88:

        > (...) Now, if I tell someone: "You should come to dinner more punctually; you know it begins at one o'clock exactly"—is there really no question of exactness here? because it is possible to say: "Think of the determination of time in the laboratory or the observatory; there you see what 'exactness' means"? "Inexact" is really a reproach, and "exact" is praise. (...)

  • gjsman-1000 1 hour ago
    Is this the same model as the “Spatial Scenes” feature in iOS 26? If so, it’s been wildly impressive.
    • alexford1987 21 minutes ago
      It seems like it, although the shipped feature doesn’t allow for as much freedom of movement as the demos linked here (which makes sense as a product decision because I assume the farther you stretch it the more likely it is to do something that breaks the illusion)

      The “scenes” from that feature are especially good for use as lock screen backgrounds

    • mercwear 1 hour ago
      I am thinking the same thing, and I do love the effect in iOS 26
  • bbstats 26 minutes ago
    would love a multi-image version of this.
  • jokoon 1 hour ago
    does it make a mesh?

    It doesn't seem very accurate, and I have no idea how it does on a photo of a large scene, but that could be useful for level designers.
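
    If the output is Gaussian splats rather than a mesh (as other comments here suggest), I suppose one lossy route would be to treat the splat centers as a point cloud and run Poisson surface reconstruction over them. A rough sketch using the third-party open3d package (my assumption; none of this ships with ml-sharp):

        # Hypothetical splats-to-mesh route: mesh the splat centers.
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("splats.ply")   # splat centers as a point cloud
        pcd.estimate_normals()                        # Poisson needs per-point normals
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
        o3d.io.write_triangle_mesh("scene_mesh.ply", mesh)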

  • burnt-resistor 39 minutes ago
    Damn. I recall UC Davis was working on this sort of problem for CCTV footage 20 years ago, but this is really freakin' progress now.
  • Invictus0 1 hour ago
    Apple is not a serious company if they can't even spin up a simple frontend for their AI innovations. I should not have to install anything to test this.
    • consonaut 1 hour ago
      It's included in the iOS Photos app. I think this is a separate release of the tech underneath.
      • londons_explore 19 minutes ago
        What user feature does it power?
        • givinguflac 9 minutes ago
          Literally what this model does: create seemingly 3D scenes from 2D images, in the iOS Photos app. It works even better when you take a real spatial image, which uses dual lenses.
  • b112 1 hour ago
    Ah great. Easier for real estate agents to show slow panning around a room, with lame music.

    I guess there are other uses? But this is just more abstracted reality. It will be inaccurate just as summarized text is, and future people will again have no idea as to reality.

    • stevep98 1 hour ago
      It will be used for spatial content, for viewing in Apple Vision Pro headset.

      In fact you can already turn any photo into spatial content. I’m not sure if it’s using this algorithm or something else.

      It’s nice to view holiday photos with spatial view … it feels like you’re there again. Same with looking at photos of deceased friends and family.

    • tim1994 1 hour ago
      For panning you don't need a 3D view/reconstruction. This does also allow translational camera movements, though only for nearby views. Maybe I am being overly pedantic here, but for HN I guess that's appropriate :D
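
      To make that concrete: a pure rotation (pan/tilt) maps the original image to the new view through a single homography, no depth needed, whereas translation has no such 2D shortcut and needs per-pixel depth. A toy numpy sketch of the rotation case (my illustration, with made-up intrinsics):

          import numpy as np

          # Toy camera intrinsics (focal lengths and principal point).
          K = np.array([[800.0,   0.0, 320.0],
                        [  0.0, 800.0, 240.0],
                        [  0.0,   0.0,   1.0]])

          theta = np.deg2rad(5.0)  # pan 5 degrees about the vertical axis
          R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                        [ 0.0,           1.0, 0.0          ],
                        [-np.sin(theta), 0.0, np.cos(theta)]])

          # Homography that warps the source image into the rotated view,
          # usable with e.g. cv2.warpPerspective(img, H, (w, h)).
          H = K @ R @ np.linalg.inv(K)
          print(H)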
      • parpfish 1 hour ago
        For a good slow pan, you don’t need 3d reconstruction but you DO need “Ashokan Farewell”