Live Stoner Chat - Jan-Mar '26

meh, whatever....i could care less anymore anyway....i'm jus ridin it out here til the sidewalkz roll up completely & then i'm done, so.... :shrug: ppp
But we like having you here.
(Wherever we are)

 
I also think we should not fear AI. We should, instead, make sure everyone knows AI is just regurgitating what it believes the answer is, based on how popular that answer is in its "language model."

That alone should tell everyone what they need to know: AI is easily "gamed." This has been demonstrated, unintentionally, for some time now.

People are taking AI results as gospel, to the point that some people have died. This reflects the common misconception that AI can determine the factuality of what it spits out. That alone, for me, is reason enough not to offer it as a source of information in any regard when the sources behind the language model are uncontrolled.

AI works best when all the data it filters through is factual. On AFN, it is easier to determine what is factual because you can associate a name with the provided information, compare and contrast answers, and determine the best result. With AI, none of that process is visible, and the selection of the "correct" answer is more a popularity contest than a factual determination.
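The "popularity contest" point above can be sketched in a few lines. This is a toy illustration I'm adding (not a real language model, and the grow-advice strings are made up): a system that answers by frequency of what it has "read" will happily return a common wrong answer over a rarer correct one.

```python
# Toy sketch: answering by popularity, with no notion of factuality.
from collections import Counter

training_text = [
    "water your plants daily",   # repeated often online, but bad advice
    "water your plants daily",
    "water your plants daily",
    "water when the top inch of soil is dry",  # less common, more accurate
]

def most_popular_answer(corpus):
    # Return the most frequent statement in the corpus.
    return Counter(corpus).most_common(1)[0][0]

print(most_popular_answer(training_text))  # -> "water your plants daily"
```

The more a claim gets repeated in the training data, the more likely it is to come back out, which is exactly why an uncontrolled source pool is a problem.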

A good use for AI is in radiological diagnosis, and even then only when used to verify the findings of a human radiologist. This holds true for many areas of science where (and this is the important part) it is safe to assume all the data the AI is working with is already factual and accurate.

Sorry if this is rambling. I am sorely in need of sleep.
Training data is where the problems lie, especially when (already?) it starts to train itself on other AI output. Garbage in, garbage out, as the saying goes, only a lot worse than normal.

The dedicated-purpose AI tools, like medical imaging models trained specifically on expertly verified images, are a whole different tool than the large language tools most people play with. AI has huge beneficial potential when properly trained with medical data and with the output checked by human experts. A combination of AI analysis of symptoms followed by expert checking is already better than checking with the experts alone; at least one research report found that in their tests. This is because AI will pick up oddball medical conditions that an expert will not think of initially, but that become clear after AI directs attention to the real problem.

As I mentioned before, the trouble with AI is that the term includes wildly diverse analytical tools, some valuable and effective, others toys with little real use, and still others potentially useful but actually dangerous. :pighug:
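The "AI training itself on other AI output" worry above can be caricatured with a tiny simulation. This is only a cartoon of the effect (my own sketch, not how real models are trained): each generation samples from the previous model, keeps only its most "typical" outputs, and refits, so diversity collapses over generations.

```python
# Toy sketch of model collapse from training on your own output.
import random
import statistics

def simulate_collapse(generations=5, n=100, seed=42):
    rng = random.Random(seed)
    mean, stdev = 0.0, 1.0  # the "real data" distribution
    for _ in range(generations):
        samples = [rng.gauss(mean, stdev) for _ in range(n)]
        # Keep only the most "typical" half, mimicking a model
        # favoring its own likeliest outputs.
        samples.sort(key=lambda x: abs(x - mean))
        kept = samples[: n // 2]
        mean = statistics.mean(kept)
        stdev = statistics.stdev(kept)  # refit on own filtered output
    return stdev

print(f"stdev after 5 generations: {simulate_collapse():.4f}")
```

Each pass throws away the tails, so the spread shrinks toward zero: garbage in, garbage out, compounding.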
 
Is this Discourse, or the recent one you are testing?

It's all Discourse.
So the current Discourse site is hosted through Discourse the company, but they paywall the best features; we would have to spend an additional $400/month to get them.
The one I launched last night is still Discourse, but it's self-hosted, where it's 100% unlocked.

The unlocked version is the bee's knees, man.
 

I just want to point out that most of the AI tools I've seen on Discourse either summarize a topic/thread or answer questions on how to use Discourse. I don't think we're going to be misleading growers with either of those features. lol

Years and years ago we were approached about being part of an app called Grow Buddy, where we would have fed it information from AFN to give as advice to growers in its database. I think it was the Cirrus LED guys that introduced us to them. Regardless, we didn't bite on it (though maybe we should have, in hindsight?). Shrug.
 

The self-hosted version (the server part I'm going to pay to have managed) should come out to just a bit over what we're currently paying for THIS site.

And it's beefy. And fast. And it has way more storage than the Discourse-hosted Discourse.

lol

I feel like this can get confusing quickly. Maybe I can make a drawing.
 
the dept of conservation changed thingz around 2 or 3 yrz ago, so that they don't migrate right over the top of us anymore, so....:rolleyes1: :shrug: ppp
Do they give them different directions nowadays? :shrug::crying:
 
Garbage in, garbage out is right. Always has been with computers.

When I use AI, I usually specify types of sources, e.g. published medical journals, peer-reviewed scientific research, university libraries, etc.

I try to be as specific as I can when I write queries; sometimes I have to rephrase for it to understand what I'm after.

Sometimes I work around its legal limitations, e.g. instead of "what strain is good for anxiety," I ask "what strains are commonly used by anxiety patients."

Still, it can contradict itself, and it's good to call that out and demand the correct response, even tell it to find additional sources to challenge the validity of the original one.
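That rephrasing trick is basically a template: state the question as an observation about what people do, then pin the answer to named source types. Here's a hypothetical helper of my own (the function name and wording are made up, not any real API) showing the pattern:

```python
# Hypothetical sketch: rephrase "what is good for X" as an
# observational question, and constrain the source types.
def build_query(condition, sources):
    source_clause = ", ".join(sources)
    return (
        f"What strains are commonly used by {condition} patients? "
        f"Cite only: {source_clause}."
    )

print(build_query("anxiety",
                  ["peer-reviewed journals", "university libraries"]))
```

Same information request, but phrased as usage rather than advice, which assistants are less likely to refuse.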

Omg 😲
I didn't know I had that much to say...

:baked:
 
:pighug: Well Done you...hope you enjoyed it....:bighug:..how was the reunion now you have a bit space...?...
Was nice to see the fam, but also nice to be back home. I was able to bring all my seeds with me this time, which is good 'cause I'd like to get a small grow going at some point. Our landlords don't live nearby and I don't think they will be coming around anytime soon, so I should be in the clear. We have some of the stricter weed laws in the US, so I just gotta keep to myself and I'll be fine.
 