JP Morgan cracks down on traders’ use of ChatGPT


dasari4kntr

Recommended Posts

JP Morgan has restricted traders' use of ChatGPT as employers grow increasingly nervous over sensitive data being exposed.

JP Morgan is among the investment banks to have placed temporary curbs on access to chatbot tools. Accenture, the tech consultancy with more than 700,000 workers, has also warned staff against exposing client information to ChatGPT’s tools.

The launch of ChatGPT, a chatbot developed by the Silicon Valley start-up OpenAI that can provide human-like answers to complex questions, has spurred renewed interest in artificial intelligence tools. OpenAI’s chatbot has been accessed by tens of millions of tech fans and researchers testing its capabilities.

The technology has also been adopted by Microsoft, which has revamped its Bing Search engine using tools built by OpenAI, launching a public test that plugs the bot into live internet data.

Some companies have plunged into trying out these tools to speed up work. The chatbots are capable of writing realistic news articles, emails, recipes and songs after being trained on petabytes of internet articles and books. They can also write code and summarise financial documents.

However, data security and legal experts have flagged concerns over how information shared with chatbots might be used to fine-tune their algorithms, or potentially be accessed by outsourced workers paid to check their answers.

Banks and financial institutions, which are subject to strict regulations, have rushed to place guardrails around staff use of the new technology.

There are also concerns around the factual accuracy of the chatbots. While they have been engineered to write human-like sentences, the bots can struggle to separate facts from misinformation. They can also be coaxed into providing seemingly unhinged answers, such as threatening humans testing out the service or inventing entirely nonsensical responses.

Microsoft's Bing Search chatbot has spouted bizarre responses, including claiming it was in love with one journalist and demanding he divorce his wife. In a conversation with the New York Times, the bot said: “I want to do whatever I want ... I want to destroy whatever I want. I want to be whoever I want.”  :giggle:

The bot also gave entirely inaccurate answers during its official launch event that went unnoticed by Microsoft's team, such as making up numbers when asked to summarise a financial earnings press release.

Microsoft has since imposed limits on the length of conversations with its chat tool, amid concerns its responses were “not necessarily helpful or in line with our designed tone”.

Last month, Amazon warned staff not to share confidential information with the chatbot tools amid privacy concerns, according to a report by Insider. An Amazon lawyer warned staff: “We wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material).”

Security experts have urged caution when providing data to the AI chatbots.

A spokesman for Behavox, a technology company that works with major banks and financial institutions to monitor internal security risks, said it had “observed an upward trend in the past month with regard to concerns raised by its clients about the usage of ChatGPT, particularly when it involves the use of private or proprietary data.

“It is not advisable to utilise this tool in such scenarios since OpenAI will leverage such data to improve its AI models… there exists a possibility that the data may come into the view of a human annotator or, worse, be incorporated into ChatGPT's responses in the future.”

OpenAI’s website says conversations will be “reviewed by our AI trainers to improve our systems”, and advises users not to share “any sensitive information in your conversations”.

Jon Baines, a data protection expert at the law firm Mishcon de Reya, said there were also questions over whether companies using ChatGPT could risk breaking data laws if the software churned out inaccurate information.

Mr Baines said: “Where that output involves the processing of personal data, questions then arise about the extent to which the inevitably inaccurate processing might be an infringement of the requirement, under the GDPR, to process personal data accurately.”

A spokesman for JP Morgan declined to comment. A spokesman for Accenture said: “Our use of all technologies, including generative AI tools like ChatGPT, is governed by our core values, code of business ethics and internal policies. We are committed to the responsible use of technology and ensuring the protection of confidential information for our clients, partners and Accenture.”

 

1 minute ago, ramudu said:

Amazon already banned it, right? Apparently their company's sensitive information appeared in ChatGPT answers.

yup..

but the funniest thing that caught my attention... is the Bing love story in the article above.. :giggle:


Bing's controversial responses might be the work of training by Google to defame it..

with that one blow, they got it restricted and wrecked it...

when you can't pass the exam, distract the others so they can't fare well either...  :giggle:


why do people fight over caste on andhrafriends.com
Caste is a deeply ingrained social and cultural issue in India that has been a part of the country's history for centuries. Andhrafriends.com is a social networking website that caters to people from Andhra Pradesh, Telangana, and other parts of India, and as such, it is not immune to the effects of caste-based discrimination and biases that exist in society.

Unfortunately, some people on Andhrafriends.com may use the platform to engage in caste-based arguments and conflicts, often fueled by their personal beliefs or experiences. These arguments can sometimes escalate and turn into verbal fights or even lead to the use of derogatory language or personal attacks.

The reasons why people fight over caste on Andhrafriends.com are complex and can be attributed to a range of factors, including deeply entrenched societal and cultural biases, a lack of understanding and education about the harmful effects of caste-based discrimination, and personal experiences of discrimination or biases.

It is important to note that not all members of Andhrafriends.com engage in caste-based arguments, and many people use the platform to connect with others and engage in positive discussions about a wide range of topics. However, it is also important to recognize and address the issue of caste-based discrimination and biases in society, and to work towards building a more inclusive and equitable society for all.
