Maybe that’s part of the problem? HDR implementation on my Samsung sets is garbage, I have to disable it to watch anything. Too bad too, because the picture is gorgeous without it.
Smart TVs having absolutely horrible default settings and filters that ruin any viewing experience has little to do with HDR, because the TV isn’t even processing HDR images most of the time. That stuff is already mixed, and there’s not much any device can do to give you back detail in the darks and brights. It’s a much different story when you’re actually processing real color information, like in a video game. HDR should absolutely help you see in the dark there.
I WISH it was the default settings. I went through every calibration and firmware update I could find. Even the model specific calibrations on rtings.com. Nothing made a difference.
It appears to just be a flaw in Samsung’s implementation. After going through all the Samsung forum information, the only suggestion that’s guaranteed to work is “turn it off”.
I got a Samsung monitor last year too (it was the cheapest HDR option and I kept seeing Reddit praise them) and it has such a terrible HDR experience. With HDR on, either dark colors turn light grayish, brights are too dark, darks are crushed, everything’s too bright, or colors are oversaturated. I’ve tried every combination of adjusting brightness/gamma from the screen and/or from KDE, but couldn’t figure out a simple way to turn down the brightness at night without some sort of image issue popping up. So recently I gave up and turned HDR off. I still can’t use the KDE brightness slider without fucking up the image, but at least the monitor’s brightness slider works now.
Also, if there are very few bright areas on the screen, it further decreases its overall screen brightness, which also affects color saturation, because of course it does.
Also also, I just discovered FreeSync and VRR are two different toggles in two different menus for some fucking reason, and if you only enable FreeSync like I did, you get a flickering screen.
I really wish there was a ‘no smart image fuckery’ toggle in the settings.
I didn’t really understand the benefit of HDR until I got a monitor that actually supports it.
And I don’t mean one that can simply process 10-bit color values; I mean one with a peak brightness of at least 1000 nits.
That’s how they trick you. They make cheap monitors that can process the HDR signal and so have an “HDR” mode, and your computer will output an HDR signal, but at best it’s not really different from the non-HDR mode because the monitor can’t physically produce a high dynamic range image.
If you actually want to see an HDR difference, you need to get something like a 1000-nit OLED monitor (note that “LED” often just refers to an LCD monitor with an LED backlight). Something like one of these: www.displayninja.com/best-oled-monitor/
These aren’t cheap. I don’t think I’ve seen one for less than maybe $700. That’s how much it costs unfortunately. I wouldn’t trust a monitor that claims to be HDR for $300.
When you display an HDR signal on a non-HDR display, there are basically two ways to go about it: either you scale the whole brightness range down to fit within the display’s capabilities (resulting in a dark image like in OP’s example), or you let values clip at the screen’s maximum (kinda “more correct”, but parts of the image may end up looking “washed out”).
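A minimal sketch of those two approaches, with made-up numbers (the 1000-nit signal peak and 300-nit display are just assumptions for illustration; real tone-mapping curves such as BT.2390 are smoother than a straight scale or clamp):

```python
def scale_to_display(nits: float, signal_peak: float = 1000.0, display_peak: float = 300.0) -> float:
    # Approach 1: compress the whole range so the signal's peak fits the display.
    # Every pixel gets dimmer by the same ratio, which is why the image looks dark.
    return nits * (display_peak / signal_peak)

def clip_to_display(nits: float, display_peak: float = 300.0) -> float:
    # Approach 2: pass values through unchanged and clamp at the display's maximum.
    # Midtones keep their intended brightness, but every highlight flattens to one
    # level and looks "washed out".
    return min(nits, display_peak)

# A 500-nit highlight becomes 150 nits with approach 1 and 300 nits with approach 2.
print(scale_to_display(500.0), clip_to_display(500.0))
```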
See my “set 2” links above. An (at the time) $3,200 8K television: “If you want the brightest image possible, use the default Dynamic Mode settings with Local Dimming set to ‘High’, as we were able to get 1666 nits in the 10% peak window test.”
Nope, it does have a wide color gamut and high-ish brightness; I wouldn’t have bought it unless reviews said it was OK. But it does some fuckery to the image that I can only imagine is meant to make non-HDR content pop on Windows, and it ends up messing up the image coming from KDE. I can set it up to look alright in either a light or a dark environment, but the problem is I can’t quickly switch between them without fiddling with all the settings again.
Compared to my Cooler Master, a grayscale gradient on it has a much sharper transition from crushed bright to gray, but then gets darker much more slowly as well, to the point where unless a color is black it appears darker on the Cooler Master, despite that being an IPS screen. Said gray also shows up as huge and very noticeable red, green, and blue bands on the Samsung, again unlike the Cooler Master, which also has banding, but at least its tones of gray are similar.
Also, unrelated, but I just noticed while testing the monitors that KDE’s max SDR brightness slider seems to have changed again. HDR content gets darker over the last 200 nits while SDR gets brighter. Does anyone know anything about that? I don’t think that’s how it’s supposed to work.
3-month edit: I might’ve been wrong about this. At the time I had both monitors connected to the motherboard (AMD iGPU) since the Nvidia driver had washed-out colors. Since the Cooler Master worked, I assumed the AMD drivers were fine. But a while back I ended up plugging both into the Nvidia GPU and discovered that not only were the Nvidia drivers fixed, but with them the Samsung didn’t have the weird brightness issue either.
Edit edit: Even though the brightness is more manageable, it’s still fucked. I’ve calibrated it with KDE’s new screen calibration tool, and according to that, the brightness tops out at 250 nits. However, it’s advertised and benchmarked to go up to 600, I’ve measured 800-ish using my phone sensor, and it looks much brighter than a 200-nit SDR monitor. Which makes me think that even though it’s receiving an HDR signal, it doesn’t trust the signal to actually be HDR and maps the SDR range to its full range instead, causing all kinds of image issues when the signal really is HDR.
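Purely to illustrate what a transfer-function mismatch like that would do (a rough sketch, not a claim about this monitor’s firmware: the PQ constants are the standard SMPTE ST 2084 ones, but the 2.2 gamma and the 800-nit panel peak are my own assumptions):

```python
# What the same normalized signal value (0..1) "means" under the HDR (PQ) transfer
# function versus a monitor that treats it as SDR gamma stretched over the whole panel.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal: float) -> float:
    # PQ EOTF: non-linear signal -> absolute luminance in nits.
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

def sdr_gamma_to_nits(signal: float, panel_peak: float = 800.0) -> float:
    # Guess at the broken behavior: ignore PQ and apply a 2.2 gamma over the full panel brightness.
    return panel_peak * signal ** 2.2

print(pq_to_nits(0.5))         # ~92 nits intended by the HDR signal
print(sdr_gamma_to_nits(0.5))  # ~174 nits actually shown -> midtones too bright, curve all wrong
```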
And just to make sure it’s not a Linux issue, I tried it with Windows 10 too. With the AMD GPU, HDR immediately disables itself if you enable it, and with the Nvidia GPU, enabling HDR turns off all screens, including ones not connected to it, and they don’t work until you unplug the monitor and reboot. The Cooler Master just works.
Yeesh, sounds like your monitor’s color output is badly calibrated :/ Fixing that requires an OS-level calibration tool. I’ve only ever done this on macOS, so I’m not sure where it is on Windows or Linux.
Also, in general I wouldn’t use the non-HDR-to-HDR conversion features. Most of them aren’t very good. And a lot of Linux distros don’t have HDR support (at least the one I’m using doesn’t).
It’s one of those things where it only looks good when, in the case of a video game, the GAME’s implementation of it is good AND your console/PC’s implementation is good AND your TV/monitor’s implementation is good. But unless you’ve got semi-deep pockets, at least one of those probably isn’t good, and so the whole thing is a wash.
“It was pretty much done, it was in final [quality assurance testing],” Free Radical founder and former studio director Steve Ellis told GamesIndustry.biz in 2012. “It had been in final QA for half of 2008, it was just being fixed for release. LucasArts’ opinion is that when you launch a game you have to spend big on the marketing, and they’re right. But at that time they were, for whatever reason, unable to commit to spending big. They effectively canned a game that was finished.”
They also say the controller mapping is a challenge in the emulation software, but doable. It’s the Wii version, so I bet the aiming and whatnot is going to be wonky when using a controller or keyboard and mouse versus the other releases.
Worth noting that the Wiimote just uses Bluetooth, so it doesn’t take any specialized equipment to connect it to your computer, and Dolphin has built-in support for it. The sensor bar was also just a pair of infrared LEDs; all of the actual “sensing” happened on the Wiimote itself. So you can just throw a wireless sensor bar (like $15 on Amazon) underneath your computer monitor, and it will work fine.
It really is a bloodbath in the tech sector. I don’t understand where these thousands of people are even going to go, considering major companies are on hiring freezes.
My pipe dream is a bunch of new indie studios forming out of all these layoffs and kicking publishers’ asses on sales with competent, passionate new games.
…But I guess they’d then probably sell to those publishers again and repeat…
The largest factor is lack of capital, which is something everyone is enduring due to the SVB collapse. This is a giant recession of the entire sector and I don’t see how it corrects any time soon.
While breathing is cool, I have some hope that it will start correcting this year or next.
The big thing is that the raised interest rates have helped to prevent a real recession. So the real question is when they can come back down. I hope it starts this year, even though it’ll likely take years to go back to what they were pre-pandemic, if they go that low again.
It was pretty bad for a long time, but once they started letting users make their own hats and body models and shit, it got absurd. At least the games are usually just ripoffs, but the user-made catalog is just full of straight up model rips. I don't understand how they're not getting sued to oblivion for openly making money off of copyrighted material like that.
I don’t need to remember it. I’m in the middle of replaying Baldur’s Gate 1. But that was more of a complicated math formula to derive something that we can do much more simply. The hope and fear thing not only reminds me of that scam curriculum in Donnie Darko, it also doesn’t feel like an interesting tactical layer; it does the opposite by interfering with initiative in a way that I’m not a fan of.
It’s rooted in the light side/dark side of the Force from the Star Wars tabletop game, and kind of inherent to Star Wars is making everything in the world out to be light or dark, as though it’s that simple, but hardly anything in life is.
I don’t think any designer has ever said it comes from Star Wars, and it most definitely does not use the dice as Light Side/Dark Side or imposed morality. It’s inspired by the Genesys RPG’s system of degrees of success/failure, and has narrative effects like “Yes, but” and “No, however”.
I’d seen it written up in other articles as coming from Star Wars, so perhaps it was that writer that was mistaken. I’ve watched them play, heard the rules explanations and such, and “yes, but” and “no, however” to skill checks aren’t solving some problem I’ve had in other systems.
Sure, it’s not solving anything, but IMO it’s fun giving the GM a tokenified response currency even when you pull off a success. I’ve seen a fair amount of backlash, but I just feel portraying the dice mechanic as Star Wars is miles off base, when what it adds is a narrative prompt for success/failure (D&D does this with nat 20/nat 1).
I’ll grant you I’m not typically the GM. From your perspective, do you see it making things more interesting as a GM? Because as a player, it’s less up my alley, and the GM’s response currency without that system is whatever they want it to be, because they’re the GM.
It does, I think. It powers “lair actions” and grants abilities like interrupting the turn sequence or making multiple moves in a row. When the GM has a pool of currency players can see, there’s an unsaid acknowledgement that things are going wrong/badly, which helps fuel collaboration in the storytelling aspect. I can say that someone fails an attack, but on a fail with Fear they miss the attack AND leave themselves open to a harsh counterattack, or perhaps lose their weapon. I can do all of this off the cuff in D&D because ‘GM said so’, but then the players can feel an adversarial relationship instead of a collaborative one, which is so much more encouraged in Daggerheart.
All entirely subjective, and at its core it’s still heroic fantasy, same as hundreds of other systems; if you’re put off by rolling two dice for metacurrency, it’s likely not for you.
That interrupting turn sequence part is the one that upsets me the most, and I’m not fond of the extra drag on pacing that the "yes, but"s and "no, however"s can have over time. If they are putting their weight behind it, I hope it’s resonating with others, but if they intend to ever replace their D&D with Daggerheart, I wouldn’t be thrilled with the substitution.
It comes from the FFG Star Wars RPG system and its method of creating multiple success/failure conditions. It’s an entirely independent system from the light/dark side Force mechanics.
That’s fair if it’s not solving a problem for you, but it does add something new that resonates with a lot of people (at least it did for me). I’m speaking from the Genesys side, so I don’t know how Daggerheart handles it, but I absolutely loved it. I found it made skill checks more collaborative, my table would suggest ideas for how to interpret the roll, and having more to ‘explain’ got people more descriptive in how they talked about their actions. We went from ‘I take a swing. Nope, that’s a miss’ to ‘failure with advantage, OK, I go in with my axe but I can’t get through this guy’s defenses. For my advantage, I want to hook this guy’s shield with my axe so the next attacker gets a boost die to hit’.
It does make checks more involved, but I prefer fewer, more impactful checks as a general rule anyway.
I can see why the comparison to Genesys would exist now, but I don’t think it’s a very worthwhile comparison to make in how they play out and are used in each system.
It’s interesting, and it seems like a good change for people who have done a lot of D&D, but it’s probably not going to be a complete replacement for 5e. It seems good for short campaigns, but it only has one book out for now.
Tbf, a lot of people misjudged it, including Larian. I don't think a lot of people really believed the "choices and decisions matter" would work as well as it did. Prior to release, I read an article that talked about how it was gonna be neat that the in-game news would update based on your actions. Like, that was the noteworthy function to discuss about the game. "NPCs might talk about your actions in passing to each other".
Did Microsoft underestimate it more than others? Sure. But pretending like every corporation, including Larian, didn't underestimate it a whole lot is a bit crazy.
Edit: and isn't the game Divinity: Original Sin II? Did it have other names in other international markets?
Edit: this was submitted as a response to https://lemmy.world/comment/3615435 but Kbin didn't seem to actually tie them together. It shows me that it was written as a reply on Kbin, but seems to have lost connection to the comment hierarchy.
The degree of success couldn’t be predicted, sure. But Larian is not a new studio, BG is a big IP, DOS2 was a big success, The Witcher 3 was a tremendous success, and the game was in early access for 3 years, so you could very easily gauge how it was going.
If a decider can’t see that coming at least as a significant possibility, they’re all clowns who don’t deserve more than the lowest wages.
Except virtually everyone still got it wrong. Even the head of Larian thought it’d top out at 100k max. That’s currently its average, with its max being more than 800% higher.
BG is a big IP, but it’s never had this level of success. Look at Diablo III’s release (a similar IP with a long break between games). It had a better advertising campaign and still kind of became noise fairly quickly. Game news sites barely covered BG3 until it hit it big.
Microsoft definitely undershot, but it was likely basing it on a lot of the aggregated news as well. It had barely any coverage prior to its official release. This is usually a sign that the game will be mediocre.
Larian is a big studio, but the last expected game from its only really well-known IP was cancelled after being put on hold for four years (granted, BG3 was also being developed during this time). Its biggest games prior to this got at least partially funded on Kickstarter (not a knock against KS, but it’s not generally seen as the sign of a strong studio by exec types).
I don't blame an executive for not seeing this coming.
Executives obviously didn't see this coming. But neither did game journalists or even gamers.
It’s a mistake in hindsight, but with what everyone generally knew at the time, it was the expectation of most.
There is a difference between misjudging the success and betting on the failure.
Did you read the paper? BG3 was assessed far below Just Dance or Let’s Sing ABBA! It was at the very bottom of the list!
I bought the game blind a year before release. Not to test it, but because I knew where I was going. Of course I had big fears about it, because many games pretended to be BG successors and I didn’t want to get my expectations too high. But I didn’t know anything about it, because I didn’t want to spoil the surprise.
The information was there. I don’t know why journalists or whoever didn’t see it coming, but I was prepared for it being a big thing for me. It is literally their job to assess whether a game will work or not. They bet on failure. They couldn’t have been more wrong, and I don’t think there was any sign of failure.
It was expected to be a second release after being a Stadia exclusive. This isn't judging quality, just impact.
Edit: and let’s not exaggerate with “far below” when it was in the same group. And the ranking isn’t even entirely based on expected sales; the asking prices and the levels aren’t in order. You’re completely misinterpreting one quote and assuming too much from a chart.
I think it’s just an interesting story since we have actual internal emails from Microsoft that we wouldn’t have if it weren’t for the justice department’s lawsuit to stop the Activision buyout.
I’m well aware of that. That’s why I named it. They said “Divinity of Sin 2”. I was asking if they meant Divinity: Original Sin 2 and if it went by a different name in other markets. I thought that was clear. I’m not sure how you came to think I was asking what it is.
I honestly don't know how that interpretation was possible in the given context. It was mentioned in direct response to someone saying "Divinity of sin 2" and I corrected it.
Humans using past work to improve, iterate, and further contribute themselves is not the same as a program throwing any and all art into the machine learning blender to regurgitate “art” whenever its button is pushed. Not only does it not add anything to the progress of art, it erases the identity of the past it consumed, all for the blind pursuit of profit.
Me not knowing everything doesn’t mean it isn’t known or knowable. Also, there’s a difference between things naturally falling into obscurity over time and context being removed forcefully.
And then there’s when it’s too difficult to keep them up, exactly like how you can’t know everything.
We probably ain’t gonna stop innovation, so we might as well roll with it (especially when it’s doing a great job redistributing previously expensive assets).
If it’s “too difficult” to manage, that may be a sign it shouldn’t just be let loose without critique. Also, innovation is not inherently good and “rolling with it” is just negligent.
Where did the AI companies get their code from? It’s scraped from the likes of Stack Overflow and GitHub.
They don’t have the proprietary code that is used to run companies because it’s proprietary and it’s never been on a public forum available for download.
Stable Diffusion uses a dataset from Common Crawl, which pulled art from public websites that allowed them to do so. DeviantArt and ArtStation allowed this, without exception, until recently.
Devil's advocate. It means that only large companies will have AI, as they would be the only ones capable of paying such a large number of people. AI is going to come anyway except now the playing field is even more unfair since you've removed the ability for an individual to use the technology.
Instituting these laws would just be the equivalent of companies pulling the ladder up behind them after taking the average artist's work to use as training data.
How would you even go about determining what percentage belongs to the AI vs the training data? You could argue all of the royalties should go to the creators of the training data, meaning no one could afford to do it.
How would you identify text or images generated by AI after they have been edited by a human? Even after that, how would you know what was used as the source for training data? People would simply avoid revealing any information and even if you did pass a law and solved all of those issues, it would still only affect the country in question.
Literally the definition of greed. They don’t deserve royalties for being an inspiration and moving a weight a fraction of a percent in one direction…
If AI art is stolen data, then every artist on earth is a thief too.
Do you think artists just spontaneously conjure up art? No. Through their entire lives of looking at other people’s works, they learn how to do stuff, they emulate, and they improve. That’s how human artists come to be. Do you think artists go around asking permission from millions of past artists to learn from their art? Do they track down whoever made the fediverse logo if they want to make similarly shaped art with it? Hell no. Consent in general is impossible too, because, let’s be honest, a whole lot of those artists are long dead and can’t give it. It’s the exact same way AI is made.
Your argument holds no consistent logic.
Furthermore, you likely have a misunderstanding of how AI is trained and works. AI models do not store or copy the art they’re trained on. They study shapes, concepts, styles, etc., and encode those concepts into matrices of vectors. Billions of images and words are turned into a mere 2 gigabytes in something like SD fp16. 2 GB is virtually nothing; no compression comes anywhere near that. So unless you actually took very few images and made a 2 GB model, it has no capability to store or copy another person’s art. It retains no knowledge of any specific copyrighted work; it only knows concepts, and concepts like a circle, a square, etc. are not copyrightable.
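Back-of-the-envelope version of that size argument (the ~2 billion image count is my own rough assumption for an SD-scale training set, not a quoted figure):

```python
# Rough arithmetic only; the exact image count is an assumption, order of magnitude at best.
model_bytes = 2 * 1024**3          # ~2 GB fp16 checkpoint
training_images = 2_000_000_000    # ~2 billion training images (assumed)

print(model_bytes / training_images)  # ~1 byte of weights per training image,
                                      # versus tens or hundreds of kilobytes for even a small JPEG,
                                      # so the weights can't be an archive of the training set
```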
If you think I’m just being pro-AI for the sake of it. Well, it doesn’t matter. Because copyright offices all over the world have started releasing their views on AI art. And it’s unanimously in agreement that it’s not stolen. Furthermore, resulting AI artworks can be copyrighted (lot more complexity there, but that’s for another day).
It’s a tool that can be used to replicate other art, except it doesn’t replicate art, does it?
It creates works based on other works, which is exactly what humans do; whether or not it’s sapient is irrelevant. My work isn’t valuable because it’s copyrightable. Only a sociopath thinks like that.
What gives a human right to learn off of another person without credit? There is no such inherent right.
Even if such a right existed, I, as a person who can do AI training, would then have the right to create a tool to assist me in learning, because I’m a person with the same rights as anyone else. If it’s just a tool, which it is, then it’s not the AI that has the right to learn; I have the right to learn, and I used that right to make the tool.
I can use Photoshop to replicate art a lot more easily than with AI, and none of us are going around saying Photoshop is wrong (though we did say that before). The AI won’t know any specific artwork unless it’s an extremely repeated pattern like the “Mona Lisa”. It literally does not have the capacity to contain other people’s art, and therefore it cannot replicate others’ art. I have already proven that mathematically.
TF2 was great before they increased the player limit (I think that was before it became free to play?). It was a hero shooter with strategy and synergy. It became a spammy farm fest with too many items and too many players for what the maps were designed for.
One of the Polygon commenters had it right: Valve should put the game at a 30% discount for a week, where said 30% is their cut, so every sale during that time goes straight to the devs.
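Quick sanity check on that math (toy numbers: a hypothetical $60 price, and Valve’s usual 30% revenue share):

```python
price = 60.00
valve_cut = 0.30

dev_normal = price * (1 - valve_cut)   # $42 per full-price sale with the usual split
promo_price = price * (1 - 0.30)       # $42 sticker price during the 30%-off week
dev_promo = promo_price                # Valve waives its cut, so the dev keeps all of it

print(dev_normal, dev_promo)  # both 42.0: devs earn the same per copy, buyers save 30%,
                              # and Valve is the one eating the discount
```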