Good Thought Good Action Foundation Draws People to the Outdoors

By CBN, October 18, 2016

Central Oregon draws people into its community with the allure of the outdoors, affording a seemingly endless opportunity to participate in a variety of non-traditional sports: rock climbing, martial arts, cycling, skiing, horseback riding and snowboarding. The Good Thought Good Action Foundation was created to support the students of our community who choose to participate in these sports. It offers training scholarships to keep them involved and to take their skills to the next level, as well as college assistance scholarships for students who make these non-traditional sports their main sport of choice at the junior and high school level and beyond. The foundation's hope is that, through its training and college assistance scholarship programs, it can stimulate the growth of students and families and also give back to the community.

In the five years since its inception, the GTGA Foundation has provided thousands of dollars in training and college assistance scholarships, and has contributed time and effort to many other local non-profit organizations. Its members believe that good thought does lead to good action, and as a foundation they hope to grow, encourage and support those who wish to continue down their chosen path.

What's Your Good Thought? For more information, or to donate to the scholarship programs and other community services, visit www.gtgafoundation.org or email [email protected].
Artificial intelligence can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI's predictions. Recent cases show that people don't like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Should you trust Dr. Robot?

IBM's attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in its recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts' opinion, doctors would typically conclude that Watson wasn't competent. And the machine couldn't explain why its treatment was plausible, because its machine learning algorithms were simply too complex to be fully understood by humans. This caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson's premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the programme after discovering that its cancer doctors disagreed with Watson in over two-thirds of cases.

The problem with Watson for Oncology was that doctors simply didn't trust it. Human trust is often based on our understanding of how other people think, and on experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals in large amounts of data.

Even if it can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to understand. And interacting with something we don't understand can cause anxiety and make us feel like we're losing control. Many people are also simply not familiar with the many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes wrong: a Google algorithm that classified people of colour as gorillas; a Microsoft chatbot that turned into a white supremacist in less than a day; a Tesla car operating in autopilot mode that was involved in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.

A new AI divide in society?

Feelings about AI also run deep. My colleagues and I recently ran an experiment in which we asked people from a range of backgrounds to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI, and sceptics became even more guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

Three ways out of the AI trust crisis

Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's attitudes towards the technology, as we found in our study. Similar evidence suggests that the more you use other technologies, such as the internet, the more you trust them.

Another solution may be to open the "black box" of machine learning algorithms and be more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports about government requests and surveillance disclosures. A similar practice for AI systems could help people better understand how algorithmic decisions are made.

Research suggests that involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior, and were more likely to use it in the future.

We don't need to understand the intricate inner workings of AI systems, but if people are given at least a bit of information about, and control over, how they are implemented, they will be more open to accepting AI into their lives.

This article was originally published on The Conversation.

Citation: People don't trust AI—here's how we can change that (2018, January 10), retrieved 18 July 2019 from https://phys.org/news/2018-01-people-dont-aihere.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
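The "slightly modify the algorithm" mechanism from the study mentioned above can be made concrete with a minimal sketch. This is an illustration, not the study's actual interface: the function name, the percentile-forecast scenario, and the adjustment bound are all assumptions chosen for the example. The idea it demonstrates is that the interface accepts a human tweak to the model's output but clamps it to a small range, so the algorithm remains the primary forecaster while the user retains a sense of control.

```python
# Illustrative sketch (hypothetical names and numbers, not from the study):
# a forecast interface that lets a user nudge an algorithm's prediction,
# but only within a small bound.

def bounded_adjust(model_prediction: float, user_adjustment: float,
                   max_adjustment: float = 5.0) -> float:
    """Return the model's prediction shifted by the user's tweak,
    clamped so the human can only slightly modify the output."""
    clamped = max(-max_adjustment, min(max_adjustment, user_adjustment))
    return model_prediction + clamped

# The model forecasts a percentile of 62.0; the user asks for +9,
# but the interface only permits a shift of up to +/-5.
print(bounded_adjust(62.0, 9.0))  # 67.0
```

The design choice the study points to is the clamp: unlimited overrides would let users discard the algorithm entirely, while a small bounded adjustment keeps its forecasting advantage largely intact and still makes people more willing to use it.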
The British Army said on Wednesday it is investigating after a video emerged showing soldiers on a shooting range firing at a picture of opposition Labour leader Jeremy Corbyn.

The 26 seconds of footage shared on social media, reportedly recorded in Afghanistan's capital Kabul, shows four army personnel shooting at an image of the veteran left-wing leader.

"We are aware of a video circulating on social media," an army spokesperson said. "This behaviour is totally unacceptable and falls well below the high standards the Army expects. A full investigation has been launched."

A Labour Party spokesperson said: "This behaviour is alarming and unacceptable. We have confidence in the Ministry of Defence to investigate and act on this incident."

Britain's Press Association said it understood the soldiers had fired a non-lethal hardened wax substance rather than metal bullets at the Corbyn image, which is pockmarked in the footage. It said the incident is believed to have taken place in the past few days and involved soldiers from the army's 3rd Battalion, Parachute Regiment.

The range is on a compound in Kabul where personnel practise "guardian angel" drills protecting VIPs, according to Sky News. Images of celebrities feature at the site, but as targets to be protected rather than shot at, it added.

"If authentic this is unacceptable," junior defence minister Tobias Ellwood said on Twitter, vowing that he was "looking into it".

Corbyn, 69, a leftist stalwart, is a reviled figure among many British Conservatives and right-wingers, who label him a Marxist. A low-level fixture of British politics for four decades, he unexpectedly took control of the Labour Party in 2015 on a proudly socialist programme. A proclaimed pacifist, Corbyn has promised to make "conflict resolution and human rights" central to Britain's foreign policy if he takes power, and to be "guided by the values of peace, universal rights and international law."
Virgin Galactic, owned by Sir Richard Branson, completed a successful test on May 28, 2009 of its hybrid nitrous oxide motor, designed by Scaled Composites and a subcontractor, Sierra Nevada Corporation. The innovative hybrid motor is the largest of its kind in the world and offers safety features including a kill switch, allowing the spaceship to glide back to Earth and perform a conventional runway touchdown.

The Virgin Galactic model, dubbed SpaceShipTwo, is being built by aerospace expert Burt Rutan, owner of Scaled Composites LLC, located in Mojave, California. As one might expect, Rutan and Branson have come up with a highly efficient and extraordinary design for their space tourism spacecraft. SpaceShipTwo will launch in the upper atmosphere after detaching from the mother ship, called Eve, named after Sir Richard's mother. The hybrid motor uses nitrous oxide and, according to Sir Richard, does not contain harmful toxins, unlike the solid rockets used by the space shuttle. Another advantage of the upper-atmosphere launch is the savings on fuel.

SpaceShipTwo has the capacity to carry six space tourists and two pilots into suborbital space at speeds of up to 2,500 mph, soaring about 65 miles above the Earth. The expected ticket price is $200,000 per passenger, and there are currently 300 space tourists on the waiting list. Testing on SpaceShipTwo will begin later this year.

Paul Allen provided major funding for the earlier SpaceShipOne design, which went a long way towards garnering the $10 million Ansari X Prize. The Virgin Galactic team is fired up and ready to go.

Image: Eve, named after Sir Richard's mother. Credit: Scaled Composites LLC

Sources:
Scaled Composites LLC, www.scaled.com
Virgin Galactic, www.virgingalactic.com

© 2009 PhysOrg.com

Citation: Sir Richard Branson All Fired Up With Latest Rocket Motor Test (2009, May 31), retrieved 18 August 2019 from https://phys.org/news/2009-05-sir-richard-branson-latest-rocket.html