
Driverless Cars.


Bper

Recommended Posts

1 hour ago, Mjolinor said:

 

Let's change the problem slightly:

Run over a person or veer left over the cliff.

Will it self preserve?

Scary.

Hi John, faced with a pedestrian or a cliff, AVs are designed to prioritise pedestrian safety. Ethical programming and regulations typically value human life over the vehicle. However, the exact response might depend on the AV's programming and its ability to assess the situation. :smile:


2 hours ago, Haliotis said:

Sadly, this reaction sums it up very well.  The various agencies in favour of driverless cars always play upon their conviction that superior AI will keep the roads safer.  Whilst we all know that some drivers are idiots, the true facts are that most drivers are much more capable than we are led to believe, and these drivers are being insulted by the AI pundits.

Being deliberately ignored are the possible failings of roadside technology, and of the vehicle system itself - whilst technology has become extremely reliable, there is still no such thing as absolute infallibility, and this would be a very necessary requirement for driverless vehicles AND the whole operating network.  Also, the ongoing charges for the car owner have not yet been mentioned - but I think they would be considerable.  Ongoing, too, would be the absolute necessity for the system (vehicles and network) to be maintained to faultless standards at all times - another increase in costs to the car owners.

Not yet mentioned are the high risks possible due to vandalism of roadside installations - evidence of this shows up on a daily basis, and the AI installations would be equally vulnerable.  Imagine something as simple as a camera lens being painted out!

AI pundits - dream on!!!

Hi Albert, these are valid concerns, and overconfidence in AI is a risk. SDCs should focus on reducing accidents, not on eliminating them entirely. While many drivers are skilled, SDCs can offer advantages in tricky situations. Addressing potential failures in the technology and securing installations against vandalism are crucial, and clarity on costs and ongoing safety improvements will be key to earning public trust. SDCs are seen to have potential, but development needs to prioritise these concerns. :smile:


20 minutes ago, Bper said:

Machine learning could help AVs learn to adapt to these situations. Completely open decision making would build trust and allow for human intervention if needed. :smile:

Ah, machine v machine. One case would be the BMW AI and the Nissan AI. Perhaps cars could be in a local network where they 'agree' a collective protocol. Car A knows it will reach the roundabout before Car B on the right. Cooperatively, B will cede to A - unless B is a BMW bot, in which case...

  • Like 1

11 hours ago, Cyker said:

They can't because they can't do what a human would do - Pick the hidden 3rd option!

Not at the moment, but they are certainly working on it. :smile:


12 minutes ago, Roy124 said:

Ah, machine v machine. One case would be the BMW AI and the Nissan AI. Perhaps cars could be in a local network where they 'agree' a collective protocol. Car A knows it will reach the roundabout before Car B on the right. Cooperatively, B will cede to A - unless B is a BMW bot, in which case...

Hi Roy, V2X is a fascinating idea for SDCs; a local network for traffic flow could improve efficiency and safety. Imagine cars coordinating manoeuvres and avoiding collisions. Standardisation, network reliability, and interaction with human drivers are crucial. Your BMW vs Nissan question is funny, but it shows why universal communication matters. V2X has potential, but it needs a great deal of work on these issues. :smile:
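Purely as an illustration, the "agree a collective protocol" idea might be sketched like this - the car names and timings are invented, and real V2X messaging is vastly more involved than a sorted list:

```python
from dataclasses import dataclass

@dataclass
class Car:
    name: str
    eta_to_roundabout: float  # seconds until this car reaches the junction

def agree_priority(cars):
    """Toy cooperative protocol: every car broadcasts its estimated
    arrival time, and all agree that the earliest arrival proceeds
    first while the others cede. Ties are broken by name so every
    car computes the same deterministic order."""
    order = sorted(cars, key=lambda c: (c.eta_to_roundabout, c.name))
    return [c.name for c in order]

# Car A will arrive before Car B, so B cedes to A.
cars = [Car("B (Nissan)", 4.0), Car("A (BMW)", 2.5)]
print(agree_priority(cars))  # ['A (BMW)', 'B (Nissan)']
```

The point of the sketch is only that every car runs the same rule on the same shared data, so they all agree without a central controller - which is exactly where a non-cooperating "BMW bot" would break things.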



52 minutes ago, Bper said:

Hi John, faced with a pedestrian or a cliff, AVs are designed to prioritise pedestrian safety. Ethical programming and regulations typically value human life over the vehicle. However, the exact response might depend on the AV's programming and its ability to assess the situation. :smile:

"Three laws safe" (Asimov)

Forgive me for not wanting to be the one to test that out.

  • Haha 1

Can you imagine:

Around the table in Brussels, chaired by some Eurocrat who travels in a chauffeur-driven limousine, the CEOs of the EU manufacturers and Eurocrats from member states thrash out the protocols.  Porsche and other supercar makers insist that their cars have priority, as they will waste less fuel by not having to brake and accelerate.

Mercedes will be happy as long as all lights go Green as they approach. 

BMW that all lights go Red as they approach but their lane goes Green.

Renault and Citroën have no opinion, as their VDS will not be compatible with other SDCs.

FIAT and Alfa Romeo wonder what protocols have to do with them.

 

  • Like 1
  • Haha 2

37 minutes ago, Mjolinor said:

"Three laws safe" (Asimov)

Forgive me for not wanting to be the one to test that out.

John, that's a good point. While the Three Laws are a thought-provoking idea, they're fictional and wouldn't work for driverless cars. The real problem lies in crafting legislation that assigns responsibility in accidents, ensures clear communication between car and user, and defines the ethical considerations for the car's decision-making process. :smile:

  • Like 1

For the reasons I previously stated, in my honest opinion I do not believe that autonomous driving will ever be truly safe. Apart from technical failure, if AI misunderstands something it cannot think through the situation - probably in milliseconds - in the same way that a human being can.  Where AI would have to check and respond to a number of “if/then” decisions, a human being acts on the final decision immediately, because he/she can “read” the whole scene before them in one “take”.

In defending the ability of AI, the one danger that must be avoided is underestimating the capabilities of the human mind.  And do not forget that, in the first instance, the abilities of AI are developed by humans.

  • Like 1

52 minutes ago, Roy124 said:

Can you imagine:

Around the table in Brussels, chaired by some Eurocrat who travels in a chauffeur-driven limousine, the CEOs of the EU manufacturers and Eurocrats from member states thrash out the protocols.  Porsche and other supercar makers insist that their cars have priority, as they will waste less fuel by not having to brake and accelerate.

Mercedes will be happy as long as all lights go Green as they approach. 

BMW that all lights go Red as they approach but their lane goes Green.

Renault and Citroën have no opinion, as their VDS will not be compatible with other SDCs.

FIAT and Alfa Romeo wonder what protocols have to do with them.

 

Driverless car legislation is like a ride through a legislative amusement park. Imagine this: lobbyists from car companies fighting it out in a political playground, each insisting their car deserves special treatment - one demands all the green lights because of 'German engineering,' while another argues for speed limits to be a thing of the past in the name of 'fuel efficiency.' Meanwhile, politicians, many of whom haven't parallel parked since the '80s, are grappling with the difference between a lane change and a moral dilemma. It's like watching a robot uprising, except instead of lasers, it's a barrage of PowerPoint presentations so mind-numbingly dull, you'll find yourself secretly hoping for the robots to just take over already. :laugh:

  • Like 2

25 minutes ago, Haliotis said:

For the reasons I previously stated, in my honest opinion I do not believe that autonomous driving will ever be truly safe. Apart from technical failure, if AI misunderstands something it cannot think through the situation - probably in milliseconds - in the same way that a human being can.  Where AI would have to check and respond to a number of “if/then” decisions, a human being acts on the final decision immediately, because he/she can “read” the whole scene before them in one “take”.

In defending the ability of AI, the one danger that must be avoided is underestimating the capabilities of the human mind.  And do not forget that, in the first instance, the abilities of AI are developed by humans.

Hi Albert, while AI and autonomous driving show progress, safety concerns remain. AI obviously can't match human intuition and quick decision making, especially in unexpected situations. You're right to point out that humans create AI, and human drivers have unique skills AI cannot fully replicate. We need to balance AI's potential with human strengths for safer transportation, if it is ever to be implemented. :smile:


50 minutes ago, Haliotis said:

if AI misunderstands something it cannot think through the situation - probably in milliseconds - in the same way that a human being can.  Where AI would have to check and respond to a number of “if/then” decisions, a human being acts on the final decision immediately, because he/she can “read” the whole scene before them in one take

With an accident occurring when they get that take wrong.

Interesting yesterday.   I was in lane 2 travelling at 70.  A short distance ahead was one of those long, low lorries, 4 or 6 wheels.  There was no load just low greyish coloured side boards.  Given the road contours only the boards were visible. 

The lorry was at right angles to the traffic flow and seeking a right turn into the opposite traffic at a legal gap.  The whole of the body blocked lane 2.

I think AI/radar would probably have seen it before me.

  • Like 1

26 minutes ago, Roy124 said:

With an accident occurring when they get that take wrong.

Interesting yesterday.   I was in lane 2 travelling at 70.  A short distance ahead was one of those long, low lorries, 4 or 6 wheels.  There was no load just low greyish coloured side boards.  Given the road contours only the boards were visible. 

The lorry was at right angles to the traffic flow and seeking a right turn into the opposite traffic at a legal gap.  The whole of the body blocked lane 2.

I think AI/radar would probably have seen it before me.

Roy, you're right, accidents can happen even with advanced technology. However, in your lorry situation, an SDC's sensors might have had an advantage: radar could potentially detect the lorry earlier than a human driver, especially if visibility was limited. :smile:

  • Like 1

It will lead to virtual couplings between cars, so if one makes an emergency stop it will cause the 10 cars behind to do the same. Railways solved that particular problem with actual couplings, but electronics is now fast enough to provide that virtual link.

 

  • Like 1


1 minute ago, Mjolinor said:

It will lead to virtual couplings between cars so if one makes an emergency stop it will cause the 10 cars behind to do the same.

In the case of your line of cars, there is no need for all to make an emergency stop.  They will be so spaced that each can apply just sufficient braking that the last might be able to roll to a gentle stop.

The real-life version, which I first read about and then witnessed from lane 1 on the M5 (2 lanes in those days), had the line of cars first accelerating and then braking.  Allowing for reaction times, at the back of the line one car would be braking hard at the same time the following car was accelerating.  The collision was dramatic: steam, dust, and the front seat passenger wet himself.
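The graded-braking idea follows from basic stopping-distance arithmetic: each car has the car ahead's stopping distance plus the standing gap available to it, so cars further back need progressively less deceleration. A rough Python sketch (the figures are invented, and it assumes every car receives the stop signal at the same instant over the virtual coupling, with constant deceleration and no reaction delay):

```python
def graded_decelerations(v, lead_decel, gap, n_cars):
    """Each networked car brakes just hard enough to stop within the
    distance available to it: the car ahead's stopping distance plus
    the standing gap (stopping distance = v^2 / (2 * deceleration)).
    Required decelerations therefore fall away down the line."""
    decels = [lead_decel]
    stop_dist = v**2 / (2 * lead_decel)   # lead car's stopping distance
    for _ in range(n_cars - 1):
        stop_dist += gap                  # next car can use the gap too
        decels.append(v**2 / (2 * stop_dist))
    return decels

# 10 cars at 30 m/s (~67 mph), lead car braking at 8 m/s^2, 20 m gaps.
for i, a in enumerate(graded_decelerations(30, 8.0, 20, 10)):
    print(f"car {i}: {a:.2f} m/s^2")
```

With those made-up numbers the last car needs only about 1.9 m/s² - a gentle roll to a stop rather than an emergency brake, which is Roy's point.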


I was fortunate to have a good driving instructor, who made  a point about continuing one’s learning curve for as long as one continues to drive.  I did learn about watching the road ahead as far as one could see, and always maintaining a safe distance so that one could stop in a controlled manner even if the vehicle ahead braked hard.  I believe that, keeping to these habits, it is probable that the human eye will see an obstruction ahead before AI detects it.

Admittedly, the radar of AI will “see” the obstruction first in very poor visibility but, on the basis that you should only drive at a speed where you can stop safely within your visibility range, then you will still be able to stop safely, regardless of the radar detecting the obstruction first.

Clearly, for those drivers who do not observe these rules, AI would probably save them from a collision in poor visibility or lack of concentration.

My car has adaptive cruise control, and I frequently experiment with its behaviour towards a vehicle in front.  It does slow down when closing up on a vehicle in front, but I would not rely on this as a matter of course, just as I would not relax and trust the technology in a driverless car.

Yes, I may be of the “old school”, but I would advise a younger driver to retain his/her concentration and not rely on AI to keep them safe.

  • Like 3

1 hour ago, Roy124 said:

In the case of your line of cars, there is no need for all to make an emergency stop.  They will be so spaced that each can apply just sufficient braking that the last might be able to roll to a gentle stop.

The real-life version, which I first read about and then witnessed from lane 1 on the M5 (2 lanes in those days), had the line of cars first accelerating and then braking.  Allowing for reaction times, at the back of the line one car would be braking hard at the same time the following car was accelerating.  The collision was dramatic: steam, dust, and the front seat passenger wet himself.

Hmm.

Check out the end of this video to validate that statement.

  • Like 1

Driverless cars haven't exactly been a huge success in the US, with 25 crashes and 64 injuries in California in the last 5 years.  I don't think I'd fancy being a passenger in one.

Much is made of AI, but it's only as good as the person who programmed it.

  • Like 3

6 hours ago, Bper said:

Not at the moment but they are certainly working on it.:smile:

The point is they can't - You can only program a computer for things you know about or can think of in advance - Humans can adapt and make stuff up on the fly in the spur of the moment - Computers can't.

For instance if the choice given was run over an old lady or a bunch of school children, the computer would only pick between those two. A human could go for, say, drive off the road and on the verge to avoid both, or crash into another sturdy looking car instead, but NOT crash into the many spilled barrels marked 'Danger TNT-High Explosive Hazard'. The computer wouldn't even know those were options and there's no way we could realistically program such scenarios in.

 

1 hour ago, Haliotis said:

I was fortunate to have a good driving instructor, who made  a point about continuing one’s learning curve for as long as one continues to drive.  I did learn about watching the road ahead as far as one could see, and always maintaining a safe distance so that one could stop in a controlled manner even if the vehicle ahead braked hard.  I believe that, keeping to these habits, it is probable that the human eye will see an obstruction ahead before AI detects it.

Admittedly, the radar of AI will “see” the obstruction first in very poor visibility but, on the basis that you should only drive at a speed where you can stop safely within your visibility range, then you will still be able to stop safely, regardless of the radar detecting the obstruction first.

Clearly, for those drivers who do not observe these rules, AI would probably save them from a collision in poor visibility or lack of concentration.

My car has adaptive cruise control, and I frequently experiment with its behaviour towards a vehicle in front.  It does slow down when closing up on a vehicle in front, but I would not rely on this as a matter of course, just as I would not relax and trust the technology in a driverless car.

Yes, I may be of the “old school”, but I would advise a younger driver to retain his/her concentration and not rely on AI to keep them safe.

This kinda highlights why I don't like these systems - They will specifically help *bad* drivers - i.e. Drivers that probably shouldn't be allowed to drive in the first place. For good drivers, they will more likely be an impediment due to the false positives the system will no doubt generate regularly.

So what we're effectively doing is reducing the skill bar and making the roads more dangerous by encouraging an increase in bad drivers instead of improving the standard of driving!

  • Like 1

19 minutes ago, Cyker said:

The point is they can't - You can only program a computer for things you know about or can think of in advance - Humans can adapt and make stuff up on the fly in the spur of the moment - Computers can't.

For instance if the choice given was run over an old lady or a bunch of school children, the computer would only pick between those two. A human could go for, say, drive off the road and on the verge to avoid both, or crash into another sturdy looking car instead, but NOT crash into the many spilled barrels marked 'Danger TNT-High Explosive Hazard'. The computer wouldn't even know those were options and there's no way we could realistically program such scenarios in.

 

This kinda highlights why I don't like these systems - They will specifically help *bad* drivers - i.e. Drivers that probably shouldn't be allowed to drive in the first place. For good drivers, they will more likely be an impediment due to the false positives the system will no doubt generate regularly.

So what we're effectively doing is reducing the skill bar and making the roads more dangerous by encouraging an increase in bad drivers instead of improving the standard of driving!

You make a good point about the challenges of programming for unforeseen circumstances. Driverless cars can't handle the hidden 3rd option situations as well as humans, who can adapt on the fly.

This is a crucial limitation of current technology. They can only react to scenarios they've been trained on, and your swerve-onto-the-verge example, avoiding both the pedestrians and the explosives, is a perfect illustration. Humans excel at those split-second, creative decisions that fall outside programmed responses. While driverless cars might not be perfect, they could significantly reduce accidents caused by common human mistakes like speeding, drunk driving, or distracted driving.

Machine learning is constantly getting better, and driverless cars might be able to learn and adapt to some unforeseen situations in the future. The old-lady-or-children scenario is difficult, and it's not a clear-cut decision for humans either. Driverless cars could be programmed with specific ethical frameworks to guide their decisions in unavoidable situations. Driverless car technology is still in its infancy, and your point about unforeseen events is valid and should be a major focus for developers. They might not achieve human-level adaptability soon, but they have the potential to make roads safer in the future by minimising human-error accidents. :smile:

  • Like 2

Personally I think they're going about this all wrong - What I would like is something like the threat assessment system in the Eurofighter - It dynamically assesses all threats, marks them in the HUD along with useful and helpful information, but also prioritises the display of them all in order of threat level and importance to avoid overloading the pilot by bombarding them with too much data.

Data overload is already becoming an issue with modern cars when there're 20 different beeps and stuff flashing on and off the dash before you can even notice, to the point these warnings are practically useless. The car industry could really learn a thing or two about human interface design and ergonomics from the aviation industry - Even things like making controls feel different so they can be intuited by touch alone, rather than the current car trend of making everything look the same for purely aesthetic reasons at the expense of usability.

Having the car alert and show me a potential danger, like a small child hidden behind a car that the radar can see but I can't, and boxing it on a full HUD so I can intuitively know what's happening and do something about it safely and in advance would be FAR more favourable than e.g. just arbitrarily taking control away from me by e.g. slamming the brakes on suddenly and causing the car behind to crash into me, which is more the direction we seem to be headed in.
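For what it's worth, that triage-not-takeover idea can be sketched in a few lines. Everything here is invented for illustration (the labels, the numbers, the ranking rule) and has nothing to do with the real Eurofighter DASS:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    label: str
    severity: int          # 1 (minor) .. 5 (critical)
    time_to_impact: float  # seconds; smaller means more urgent

def prioritise(threats, max_shown=3):
    """Toy threat triage: rank by severity first, then urgency, and
    surface only the top few, so the driver sees the child behind
    the parked car before the lane-drift nag or the washer fluid."""
    ranked = sorted(threats, key=lambda t: (-t.severity, t.time_to_impact))
    return ranked[:max_shown]

threats = [
    Threat("lane drift", 1, 5.0),
    Threat("child behind parked car", 5, 2.5),
    Threat("car braking ahead", 3, 1.8),
    Threat("low washer fluid", 1, 999),
]
for t in prioritise(threats):
    print(t.label)
# child behind parked car, car braking ahead, lane drift
```

The design choice is the one from the post: the system informs and ranks rather than seizing control, and it caps how many warnings reach the driver at once.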

 

  • Like 1

When it comes to machines minimising accidents due to human error, “minimising” is the operative word. When a driver makes a mistake, he will usually take corrective action immediately and then, as the unfolding scene changes, he may make one or more decision changes in milliseconds.  These changes could be the result of affected driver(s) reacting to the primary driver’s mistake.  All of this can happen, with several drivers involved, at a speed which a machine cannot hope to deal with, especially when several drivers may be simultaneously revising their decisions.  Such a situation might even be worsened by the machine causing an accident that could otherwise have been avoided.

  • Like 1

I know someone's going to post and say nonsense, a computer has far faster reactions than a human, but I get what you mean - For instance with my junction issue, it takes the car approx 2 seconds to decide that I am not, in fact, going to crash into something and give me accelerator control back - Something I'd already determined far earlier. Also, 2 seconds is a very very long time to wait when there are cars approaching you from both directions at 30mph!! :eek: 

 


41 minutes ago, Cyker said:

Personally I think they're going about this all wrong - What I would like is something like the threat assessment system in the Eurofighter - It dynamically assesses all threats, marks them in the HUD along with useful and helpful information, but also prioritises the display of them all in order of threat level and importance to avoid overloading the pilot by bombarding them with too much data.

Data overload is already becoming an issue with modern cars when there're 20 different beeps and stuff flashing on and off the dash before you can even notice, to the point these warnings are practically useless. The car industry could really learn a thing or two about human interface design and ergonomics from the aviation industry - Even things like making controls feel different so they can be intuited by touch alone, rather than the current car trend of making everything look the same for purely aesthetic reasons at the expense of usability.

Having the car alert and show me a potential danger, like a small child hidden behind a car that the radar can see but I can't, and boxing it on a full HUD so I can intuitively know what's happening and do something about it safely and in advance would be FAR more favourable than e.g. just arbitrarily taking control away from me by e.g. slamming the brakes on suddenly and causing the car behind to crash into me, which is more the direction we seem to be headed in.

 

You're right, current car interfaces can be overwhelming, and driverless cars, with their emphasis on automation, risk making this worse. Your Eurofighter analogy shows a different approach is possible. Similar to the DASS, driverless cars should prioritise threats: a child behind a parked car should be highlighted more urgently than a minor lane departure. A head-up display (HUD) can visually represent threats, minimising distraction and allowing drivers to react calmly - imagine a highlighted child icon instead of a generic warning light. Modern cars bombard drivers with alerts until they become useless; driverless cars need well-designed systems that filter information and present only critical details. Car interiors could also have buttons with different textures or shapes, allowing drivers to interact by touch without looking away, which reduces mental strain and improves safety.

Sudden autonomous braking can be dangerous, especially on narrow roads. The ideal system should warn and assist the driver, allowing them to make informed decisions and keep some level of control. The UK car industry can learn from the aviation industry's human-machine interface design: prioritising clear information delivery, intuitive controls, and shared control could make driverless cars safer and more user-friendly. By adopting a Eurofighter-inspired approach, driverless cars could become a valuable tool for enhancing safety and creating a more positive experience on British roads. :smile:

  • Like 1

Well, the way things are going with the state of our roads, speed cameras everywhere and the price of fuel going ever upwards, we’ll soon see plenty of carless drivers…

  • Haha 4
