I think one appealing way to think about this is in terms of a prior and a likelihood for sentiment. Naive Bayes is a likelihood model (how probable am I to see this exact tweet, given that it's positive?). You're asking about the prior probability of the next tweet being positive, given that you've observed a certain sequence of sentiments so far. There are a few ways you could do this:
- The most naive way: take the fraction of the user's tweets so far that are positive as the probability that the next one will be positive.
- However, this ignores recency. You could come up with a transition-based model: from each possible previous state, there's a probability of the next tweet being positive, negative or neutral. Thus you have a 3x3 transition matrix, and the conditional probability of the next tweet being positive given the last one was positive is the transition probability pos->pos. This can be estimated from counts, and is a Markovian process (previous state is all that matters, basically).
- You can get more and more complex with these transition models: for example, the current 'state' could be the sentiments of the last two, or indeed the last n, tweets, meaning you get more specific predictions at the expense of more and more parameters in the model. You can overcome this with smoothing schemes, parameter tying, etc.
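To make the first-order transition model concrete, here's a minimal sketch of estimating the 3x3 matrix from counts. The sentiment sequence is made up for illustration, and the Laplace smoothing constant `alpha` is one simple choice of smoothing scheme, not the only one:

```python
from collections import Counter

STATES = ["pos", "neg", "neu"]

def transition_probs(history, alpha=1.0):
    """Estimate a 3x3 transition matrix from a sentiment sequence.

    alpha is a Laplace smoothing constant, so transitions never
    seen in the history still get nonzero probability.
    """
    counts = Counter(zip(history, history[1:]))
    probs = {}
    for prev in STATES:
        total = sum(counts[(prev, nxt)] for nxt in STATES) + alpha * len(STATES)
        probs[prev] = {nxt: (counts[(prev, nxt)] + alpha) / total
                       for nxt in STATES}
    return probs

# Invented example history of one user's tweet sentiments:
history = ["pos", "pos", "neg", "pos", "neu", "pos"]
P = transition_probs(history)
# P["pos"]["pos"] is the estimated probability that the next tweet
# is positive given the last one was positive.
```

Extending to a last-n model just means keying the counts on tuples of the previous n states instead of a single state, which is where the parameter blow-up comes from.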
As a final point, I think @Anony-Mousse's point about the prior being weak evidence is going to be true: really, whatever your prior tells you, I think this is going to be dominated by the likelihood function (what's actually in the tweet in question). If you get to see the tweet as well, consider a CRF as @Neil McGuigan suggests.
k-1 class label (what class was the immediately previous tweet), the k-2 class label, ..., and see if that is enough data to come up with a valid prediction. (My personal guess is that it is not enough, but we don't know unless you try.) Basically what you're doing is time series analysis. - Wesley Baugh