Amen. First of all, there are a zillion places where the line markings are either nonexistent or conflicting. My local freeway is being repaired and has two almost identical sets of line markings about 18" apart.
You really have to think to determine which are correct. And then there is weather. I can recall many rain storms where I just couldn't see the road any more and pulled under a bridge to chill until it got better.
At work we have a yearly event where all the senior engineering staff get together in an informal environment. During the day we have workshops and discuss serious stuff, and in the evenings we do more light-hearted stuff. For the opening session we usually have an external speaker, typically somebody slightly unusual: crazy inventors, entrepreneurs, people who invented computer games, stuff like that.
About three years ago we had somebody a bit different. He was a senior developer from the research lab of a major European car manufacturer, and he was talking about autonomous vehicles. In his presentation he walked us through real-life situations, sometimes frame by frame, to show how it worked. The software took the picture apart, identified all the objects it could, and labelled them: with 99.5% probability this is a tree; with 68% probability this is a dog; with 83% probability this is a discarded candy wrapper; with 97% probability this is a pedestrian. It was really amazing to see how the picture was analyzed, and the software got most of it right. Sometimes it made hilarious mistakes of course. But it was only a prototype.
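A labelling stage like the one he demoed might expose its output roughly like this. This is just my sketch; the `Detection` type, the class names, and the scores are taken from the examples above, not from his actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # best-guess object class
    confidence: float  # probability assigned by the classifier, 0.0-1.0

def describe(detections):
    """Render detections the way the demo labelled them on screen."""
    return [f"with {d.confidence:.1%} probability this is a {d.label}"
            for d in detections]

frame = [Detection("tree", 0.995),
         Detection("dog", 0.68),
         Detection("pedestrian", 0.97)]
print(describe(frame))
```

The point is that nothing downstream sees a hard yes/no; every object carries its uncertainty with it, which is what makes the later filtering stages possible.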
After that came a continuity and credibility filter. So if the previous stage had said "with 45% probability this is a dolphin", that would get overruled. There was also a category for stuff that couldn't be identified.
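The credibility part of that filter could be as simple as a whitelist of plausible road-scene classes plus a confidence floor. This is purely my guess at the shape of such a filter; the class list and threshold are invented, and the continuity part (checking that a detection persists across consecutive frames) is omitted:

```python
# Classes considered plausible in a road scene; anything else gets
# overruled, as does anything below the confidence floor. Both the
# list and the threshold are invented for illustration.
PLAUSIBLE = {"tree", "dog", "pedestrian", "vehicle", "sign", "candy wrapper"}
MIN_CONFIDENCE = 0.50

def credibility_filter(label, confidence):
    """Overrule implausible or low-confidence detections."""
    if label not in PLAUSIBLE or confidence < MIN_CONFIDENCE:
        return "unidentified"
    return label

credibility_filter("dolphin", 0.45)     # overruled: implausible and low confidence
credibility_filter("pedestrian", 0.97)  # kept
```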
As a next step, all these objects were graded by relevance: the risk they posed, any unexpected actions they might take, and their bearing on the next decision of the autonomous pilot.
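Grading by relevance might combine those three factors into a single priority score. The weights and factor names here are mine, not the speaker's, but the idea is that triage ranks objects so the planner looks at the pedestrian before the tree:

```python
def relevance(risk, unpredictability, decision_impact):
    """Combine the three grading factors into one score for triage.

    Each input is 0.0-1.0. A pedestrian near the road would score
    high on all three, a distant tree near zero. Weights invented.
    """
    return 0.5 * risk + 0.2 * unpredictability + 0.3 * decision_impact

objects = {"pedestrian": (0.9, 0.8, 0.9),
           "tree":       (0.1, 0.0, 0.0)}
ranked = sorted(objects, key=lambda k: relevance(*objects[k]), reverse=True)
```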
For example, if there is a road sign with writing on it, the software had to make a call whether the text should be analyzed and acted upon (for example, reduce speed), or whether it is an advertising billboard that can safely be ignored. The location of the sign was also a factor here, but there were quite a few borderline cases. As such it doesn't really matter if one county or state uses a slightly different typeface or color, because that's not the sole criterion.
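That "not the sole criterion" point suggests a weighted combination of cues rather than any single test, which would explain why a nonstandard typeface alone doesn't break it. A crude sketch of the idea; the cue names, weights, and threshold are all invented:

```python
def should_read_text(cues):
    """Decide whether a detected sign's text is worth analyzing.

    `cues` maps evidence to a 0.0-1.0 strength. No single cue
    decides, so an unusual typeface alone can't flip the outcome.
    All weights are invented for illustration.
    """
    weights = {"roadside_location": 0.4, "regulatory_shape": 0.3,
               "official_color": 0.2, "standard_typeface": 0.1}
    score = sum(weights[c] * cues.get(c, 0.0) for c in weights)
    return score >= 0.5

# A proper speed-limit sign in an odd typeface: still read it.
should_read_text({"roadside_location": 1.0, "regulatory_shape": 1.0,
                  "standard_typeface": 0.0})
# A billboard set back from the road: ignore it.
should_read_text({"roadside_location": 0.2})
```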
The same with road markings. Lots of stuff was being picked up: is this thing here a shadow, a crack in the road surface, a marking, or a stain? Lots of context is analyzed here, and the actions of other vehicles are observed too. For example, there was one scene where a truck had stopped on the roadway to unload some stuff, and the only way past it was to ignore the road markings and cross over into the other lane, which was used by opposing traffic. The car thus had to decide to ignore the road marking and observe oncoming vehicles to identify an opportunity to pull out and pass the truck. Observing the actions of the cars in front also helped here, but the situation might occur without there being any cars to copy, so it's all amazingly complex yet fascinating. In another scene there had been an accident, cops were directing traffic, and the car had to make a left turn despite a no-left-turns sign.
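The truck scene amounts to suppressing one rule (stay off the opposing lane) when a higher-priority condition holds and the oncoming gap is big enough. This is entirely my reconstruction of that logic, not his code; the gap threshold and the bonus for seeing other cars do it first are made up:

```python
def may_cross_marking(lane_blocked, oncoming_gap_s, cars_ahead_crossed):
    """Decide whether to cross a solid marking to pass an obstruction.

    oncoming_gap_s: estimated seconds until the next oncoming vehicle.
    cars_ahead_crossed: other drivers were observed making the same
    move, which adds confidence but isn't required (the scenario can
    occur with no cars to copy). Threshold values are invented.
    """
    NEEDED_GAP_S = 8.0
    if not lane_blocked:
        return False  # no obstruction, no reason to break the rule
    required = NEEDED_GAP_S * (0.8 if cars_ahead_crossed else 1.0)
    return oncoming_gap_s >= required

may_cross_marking(True, 10.0, cars_ahead_crossed=False)  # pull out
may_cross_marking(True, 5.0, cars_ahead_crossed=False)   # wait
```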
All this was a prototype of course and is still under development. But I don't think any problems of this type will prove unsolvable.