Training with a 2GB GPU

Then make it work. You are like the guy who comes to an island, does not see a hotel with a shower and wifi there and leaves instead of thinking: “What could I do to build a hotel with a shower and wifi here?”…

Why would he have any interest in fixing your problem for free?

You’ve also mistaken him in your analogy for yourself.

Because it is not my problem, but everybody’s problem. If everybody could use the CPU and the GPU plus their memory together, we might not be dependent on Colab and could accomplish much more in a shorter time.

We could use a CUDA GPU + its memory + the CPU + system memory + a swap file + the integrated (motherboard) GPU(!) + its memory.

At. The. Same. Time.

Now we are using either the GPU or, if that doesn’t work, the much slower CPU.

A lot of processing power is not being used because of that.
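
For what it’s worth, frameworks like PyTorch do let you pin different parts of one model to different devices by hand. A minimal sketch (assuming PyTorch is available; the model and layer sizes here are made up for illustration) of mixing GPU memory and host RAM:

```python
import torch
import torch.nn as nn

# Fall back gracefully on machines without a CUDA GPU.
gpu = "cuda" if torch.cuda.is_available() else "cpu"

class SplitModel(nn.Module):
    """Toy model with its two halves pinned to different devices."""
    def __init__(self):
        super().__init__()
        self.front = nn.Linear(32, 64).to(gpu)   # parameters in GPU memory
        self.back = nn.Linear(64, 8).to("cpu")   # parameters in host RAM

    def forward(self, x):
        x = torch.relu(self.front(x.to(gpu)))
        x = x.to("cpu")  # explicit device-to-device copy on every pass
        return self.back(x)

model = SplitModel()
out = model(torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 8])
```

The catch is that every forward and backward pass then pays for host/device copies, which in practice is usually where the hoped-for gains disappear.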

My analogy cannot be applied to me, because I am looking for solutions that are not here; he is just destroying the immature idea that is there without offering anything himself.

Besides, I offered the source code and asked important questions to solve this - what are you offering to solve the problem?

Here’s your actual solution: get access to a better (8GB+) GPU and save everyone a huge amount of trouble. You’re on the internet, you have access; now you just need to choose to make use of it. Trying to hamstring yourself even more like you’re doing above will definitely serve my amusement and your lack of progress, so if that’s what you’re after, keep it up.

You are being unreasonable here. Leave this thread, together with dkreutz, so as not to deter people who are willing to build something.

You asked for a solution. You got one.

You choosing not to make progress isn’t dkreutz’s or my issue. Your perception of what’s going on here seems to be rather skewed, in fact. People keep trying to help you, and you keep being belligerent when you’re not getting an answer you want. It’s certainly a good way to not get help, not make progress, and alienate anyone who could do something for you. I will certainly enjoy and continue to comment on your floundering until you choose to help yourself out, of course. :slight_smile:

Do let us know when you start to try fixing your own problems. :smiley:

As I count myself among “everybody”, I honestly have to disagree. I don’t have the problem, so “everybody” is proven wrong.

But as @baconator already said: please consider investing in a better GPU. In my opinion it’s a much better and more reliable option than pooling all GPU + CPU + memory + swapfile + floppy disk + … resources together.

I am not convinced that this will be worthwhile for training scenarios, as the CPU contribution will be minimal and it would likely add substantial complexity.

Floppy disk?

Summary of both + dkreutz:
I have no idea how or whether that works, but I think it will not perform well based on my lack of experience with it, and therefore I need to inform everybody here that I feel that way. And I will libel every approach that tries to come closer to it based on that prejudice.

Which makes me wonder: what if the creators of tacotron had thought that way?

Presumably everyone has a lack of experience with things that have never been done before!

I’m not saying no one should try it, I’m merely of the opinion that it’s not worth it - we can already determine that the performance contribution to training would be minimal because of what we observe with training solely on the CPU.

I couldn’t definitively comment on the complexity but it seems a fair assumption that it would add to it.

If you find evidence to suggest it will make a larger contribution to performance and you find evidence that it can be implemented simply then I suggest you make a case that it should be looked at. So far I’m not seeing anything that stands up to scrutiny.

The best proof won’t make you get your act together.

That’s not right - then I’d gladly look at it for you, but you would “need to compensate me and we need to have a business relationship”

Nothing new here.

Besides, you are using my statement out of context in a libelous way.

I get the impression that nobody here is aware of the inner workings of Tacotron and everyone is mainly a user, i.e. the blind leading the blind.

Ole, you use your words with a clear lack of understanding of their meaning. I was amused to see that you threw in “ad hominem” in the other thread, again clearly not knowing how it is applied because the case you highlighted was not an ad hominem.

Just as I wrote to baconator, you stultify yourself, nmstoker. Don’t make it worse for yourself. Leave the thread.

Best of luck making progress bringing people together with your charm and persuasive ways!

Now that is rich… please enlighten us with your deeper understanding of how Tacotron works. At least you should know by now that 2GB of VRAM will force you to use a small training batch size, which results in longer training to get results comparable with a larger batch size (if the results are comparable at all).
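
(To be fair to the 2GB crowd, the standard workaround for a VRAM-limited batch size is gradient accumulation: run several micro-batches and average their gradients before taking one optimizer step. A plain-Python sketch, with a toy linear model and made-up numbers, showing that the averaged micro-batch gradients equal the full-batch gradient, so the trade is time rather than correctness:)

```python
def grad_mse(w, xs, ys):
    """Gradient of mean((w*x - y)^2) w.r.t. the scalar weight w."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

w = 0.5
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

# One "large batch" gradient (pretend it would not fit in 2GB).
full = grad_mse(w, xs, ys)

# Two micro-batches of size 2, computed separately and averaged.
g1 = grad_mse(w, xs[:2], ys[:2])
g2 = grad_mse(w, xs[2:], ys[2:])
accumulated = (g1 + g2) / 2

print(abs(full - accumulated) < 1e-12)  # prints True
```

The equivalence only holds exactly when the loss is a mean over examples and the micro-batches are equal-sized; batch-dependent layers like batch norm still see the small batch.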

People here have shared their experience with training on the CPU - which is awfully slow and therefore impracticable, given that a Taco model needs 100k+ steps for usable results. Nobody stops you from using the CPU, but you will be on your own here.

And distributed training with multiple GPUs was already attempted here - look into the code. The additional complexity of maintaining that code did not justify the result.
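
(For reference, the simplest form of the multi-GPU training mentioned here is `torch.nn.DataParallel`, which splits each batch across the visible GPUs. A minimal sketch, assuming PyTorch is available and with a made-up model; on a machine with zero or one GPU the wrapper degrades to ordinary single-device execution, so the snippet runs anywhere:)

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
# With zero or one visible GPU this wrapper is effectively a no-op;
# with several, each forward pass splits the batch across them.
model = nn.DataParallel(model)

out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```

The wrapper itself is the easy part; keeping a whole training codebase correct and maintainable under it is what was judged not worth the result.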

tl;dr: You are asking for things here that other people already tried and abandoned for a reason.

You clearly have no clue about what you’re attempting to do here. On top of that you’ve gone and misidentified the sort of person @nmstoker is, to boot.

You ask for help, then wave it away, you respond with belligerence and complain. A smart person would accept the help and use that knowledge to improve their situation.

You, on the other hand, refuse to do so. Like I said above, when you want to help yourself out, do so and let us know. But you’re just a troll until then.

@Ole_Klett I’d like to remind you of our Community Participation Guidelines. Please be mindful of them.
