You mean there you have it... your system, your ears, not to mention your mind. YMMV, Bob
The Audiodharma Cable Cooker is what I use to burn in my cables. I really cannot understand people who spend money on cables yet have never cooked their cables on the cooker. The difference is just too obvious. I re-burn all my cables every 3-4 months.
As far as system burn-in goes, I use the Gryphon Exorcist after every 5-6 listening sessions, and lately the IsoTek burn-in CD for new electronics.
I think the only reason is that people lack knowledge as to what burn-in is. Nobody has yet been able to describe what this phenomenon is.
If anyone knows, let's hear it!
I believe that if this phenomenon were understood, it could be used as yet another design element in high end audio gear and this would allow us to achieve even higher levels of quality.
I agree with Fafafion, though I use a Hagerman FryKleaner Pro. You'll simply never hear what your system can do unless you cook its cables with one of these things. Don't think you can accomplish the same thing by simply playing music -- especially phono ICs and tonearm wires that never experience a high level signal. When I bought it, I thought it was something of an extravagance, but it's been HEAVILY used by me and friends, all of whom have become disciples. Dave
Remind me never to buy cables from some of you guys. ;-)
I have yet to find a cooked cable that I preferred to the same cable uncooked. IME with cookers, the highs take a hit. But, to each his own.
My current thinking on cable break-in is that, much like caps, it is the dielectrics and insulators that respond over time. Cables that use teflon dielectrics take longer to settle, cables that use mostly air as a dielectric don't seem to change as much.
I couldn't agree more...
I don't know about you, but personally, I Fryclean my neural implants every ninety days or so depending on how much my settings are on "listen" vs "send."
Overall aural sensitivity has a gain of +/- 3 dB depending upon whether I'm wearing my Panama or... speaking of caps... my Giants cap...
Net-net in other areas a Fryclean of my neural implants and networks routinely decreases my tennis doubles reaction time, and as for conversational wittiness and general repartee, well, that goes without saying...
:) happy listening... er sending?
btw, cable burn-in is essential and my Frycleaner is a big help in the process.
I concur with what you've experienced, Johnhelenjake. I just put my system together with speakers that had been shipped but were broken in, and an amp that had been shipped for service, with cables lying unconnected and dormant. No system was up and running for over a month. Since I put the system together, it has taken 10 days for the following:
1. the thinness and leanness to go away
2. the midbass and weight in the low bass to fill in (heft)
3. center fill to come into play
Johnhelenjake, Usblues implies that this could all simply be in "your mind," a sort of "psychoacoustic phenomenon." I'm here to testify that in all my systems I have witnessed "break-in" and "settling-in" of cables, components and systems. One of the key areas that is palpable and cannot be dismissed as psychological or "one's ears breaking in" is the bass and the way it can strengthen in your room. When the same discs that were previously anemic and lean at any volume become full and weighty with bass 10 days later, I know the change I hear is very real. Going from weak bass to strong bass, and from lean and anemic sound to full and weighty, is definitely missed when it's not there and easy to hear and feel when it is, especially when you know the discs and music so well. Whether the equipment or cables are settling in after moving things around, or breaking in / burning in with new gear, changes are occurring in the system that affect the sound coming out of your speakers.
No, Dave, you guys just have different systems, goals, preferences, etc. I have tried side-by-side comparisons with the two of the same cable, one cooked, one not. The uncooked always sounded better to me once they had settled in on their own. I do think that even cooked cables settle in after a while, perhaps adjusting to the difference in voltage/current between the cooker and system components.
For what it's worth, I am told by my high-frequency specialist friends and colleagues that it was well known to the scientists working in high-frequency laboratories that new cables "settled in" after a while. I am unaware of any "cable cookers" used in the "serious" radio industry. It seems, though, that the cables had to be used in the very application in which they were settled in. In other words, you don't get a high frequency cable to work well if you let it settle in as a power cable for a while. Has to be the same high frequency application.
I don't know if this stuff was ever published, but it was known and talked about according to people who used to work in the field. I don't know if it was measurable, though. I'd have to ask about this -- highly INTERESTING!
My present theory is that it is but simple degaussing that is going on -- nothing else. You take magnetic domains and keep making them smaller and smaller. That's what degaussing is all about. It is done by taking a signal and making a "fade-out" out of it. Similar to what the procedure used to be with the old CRT monitors, when you pressed the DEGAUSS function.
Now, when you look at a music signal (or any sound signal, for that matter), it is all a bunch of fade-outs. That's what echo and reverberation and all the tails of all the percussive sounds are.
I have taken this theory and practiced with it over the years.
The result was that 100 short fade-outs of approximately 10 seconds each, all the way down to zero, sounds worse than one very large fade-out going from full power to zero over 100 x 10 seconds.
I have not yet been able to discern any other improvement in quality once the fade-out reached 7 days. In other words, I could not hear a difference between a cable processed with 7 days of non-stop single fade-out compared to another equally new cable processed (faded out on it) for 10 days non-stop.
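If it helps anyone experiment, the procedure described here can be sketched in a few lines. The post does not specify the source material, the fade shape, or the sample rate, so white noise, a linear envelope, and a low demo sample rate are my own illustrative choices:

```python
import numpy as np

def fade_out_signal(duration_s, sample_rate=1000, seed=0):
    """White noise with one linear fade-out from full power
    down to zero over duration_s seconds. (Illustrative sketch;
    a real burn-in rig would use audio-rate output hardware.)"""
    rng = np.random.default_rng(seed)
    n = int(duration_s * sample_rate)
    envelope = np.linspace(1.0, 0.0, n)  # full power down to zero
    return rng.uniform(-1.0, 1.0, n) * envelope

# The comparison in the post: 100 short 10-second fade-outs
# versus one continuous fade-out lasting 100 x 10 = 1000 seconds.
short_fades = np.concatenate([fade_out_signal(10, seed=i)
                              for i in range(100)])
one_long_fade = fade_out_signal(100 * 10)
print(len(short_fades) == len(one_long_fade))  # True: same total duration
```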
Who has had any other tangible results and methods?
Yes, especially power cables!
Today I tested the newest theory. Without a shadow of a doubt, a power cable played in a system every day for two months doesn't sound anywhere near as beautiful as the same cable treated with a 7-day fade-out. So, settling in is different from burn-in, if it exists.
It could also be that, as I said earlier, music is a bunch of small fade-outs, so some small amount of degaussing does occur even with brand new cables, and complete degaussing (7 days of one continuous fade-out procedure) positively dwarfs the minute change produced by the small fade-outs which constitute the music signal.
I think that if this is so, burning in and settling in must not be considered two names for the same concept. They are different things altogether: burning in happens once and for all time, while settling in needs to happen every time you turn off your system for a few days or weeks.
DANGEROUS THOUGHT: There is a logical and perhaps scary conclusion to be drawn from this. It means that you could possibly WORSEN the sound of a cable purposely by applying a very long fade-in with an abrupt ending. This is similar to playing music backwards. If this is possible, it becomes even harder to believe in cable comparisons, since the candidates might be tampered with purposely by signals which knowingly alter the sound in BOTH directions: good for cable "A" and bad for cable "B". Then a blind test in front of the unsuspecting public... all of whom will choose cable "A".
Louis, you lost me on this fade-out thing. For me the bottom line is that when I have a system that is taken down, turned off, cables and equipment moved, then once the system is re-assembled it takes time for the system sound to come together and play to its potential. That just happened here. If I add new gear, break-in / burn-in (call it what you want) is needed. With equipment that does not need break-in, the equipment and cables still need to "settle in" before the sound comes together. For me, settling-in happens over the shorter term, while break-in, depending on the equipment involved, can take hundreds of hours. The accompanying change in sound is very real, and I use bass and its ability to go from lean and thin to pressurizing a room as a palpable example of one of the possible effects of settling-in, break-in or burn-in. In my post above, I use bass pressure in a room as a "palpable" example because it gets outside the realm of the more subjective perceptions audiophiles talk about, like "imaging" and "soundstage depth."
Louis, you lost me on this fade-out thing. For me the bottom line is that when I have a system that is taken down, turned off, cables and equipment moved, then once the system is re-assembled it takes time for the system sound to come together and play to its potential.
This is "settling in." This is not burn-in.
For me I look at settling-in in the shorter term of time, break-in depending on the equipment involved can take hundreds of hours. The accompanying change in sound is very real[...]
I agree. Settling-in is something that happens in about three days.
If you compare "settling in" to what people call "burning in," I believe it is best to understand it in terms of numbers.
On a scale of 1 to 100 in terms of sound quality attained, let's say settling in (the thing that happens in about three days) is 1.
On this scale, burning-in with a specially engineered signal to accomplish this is 100.
In other words, "burning in" is something that can revolutionize and alter the sound substantially, whereas "settling in" is just something that adds a little homogeneity to the sound.
Burning in can trick you into thinking it is a different component.
Settling in can't.
I'm currently running new exciting experiments with this phenomenon. Will publish results when they're ripe.
Highly interesting! I would say revolutionary.
Louis, I never said this was "burn-in." In my post I differentiated between settling-in and break-in / burn-in. I know the difference.
Any interconnect or cable will have, as a component of its construction, a dielectric. Cables/wires will also have a measurable capacitance for a given length (they act, to a degree, as capacitors). Other than a vacuum (the best dielectric) and air, all dielectrics will store a certain amount of energy and release it at various rates and time constants that are predictable to a point (http://www.designers-guide.org/Modeling/da.pdf). The better cable designers take this into account and voice their cables accordingly (the "Q" of a cable can be predicted). Once a cable or capacitor's dielectric has "charged" (so to speak), it stabilizes and reaches the target sound of the designer. That's a big PART of why the better equipment and cable builders inform the customer that their gear will take such and such a length of time to sound its best. The better dielectrics (i.e. polypropylene, polystyrene, Teflon) absorb and release less energy, and do it more slowly; thus, they "sound" better when used in caps and as insulators. They also take more time to burn in. In a highly resolving sound system, anything that's released into the audio signal outside of the intended musical content will be noticed as distortion, noise, or a frequency or time aberration by anyone with ears. Then there are those that don't want to, or can't, hear...........
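For the curious, the dielectric-absorption ("soakage") idea in the linked paper can be toyed with numerically. The classic lumped approximation puts a few slow R-C branches in parallel with an ideal capacitor: after a charged capacitor is briefly shorted, the slow branches push charge back and a residual voltage reappears. The branch fractions and time constants below are illustrative placeholders of my own, not measured dielectric data:

```python
import numpy as np

def residual_voltage(v_soak, fractions, taus, t):
    """Approximate open-circuit voltage t seconds after a brief
    short, for slow R-C branches that each held fraction*v_soak
    and recover with a single-exponential time constant tau.
    (Toy model; real dielectrics have a spectrum of taus.)"""
    return sum(f * v_soak * (1.0 - np.exp(-t / tau))
               for f, tau in zip(fractions, taus))

# "Better" dielectrics absorb less: compare a low-absorption
# film (Teflon-like) with a lossier one (PVC-like). The
# absorption fractions here are made-up illustrative numbers.
v = 100.0  # volts soaked into the capacitor/cable
teflon_like = residual_voltage(v, [0.0002], [60.0], t=300.0)
pvc_like = residual_voltage(v, [0.02], [60.0], t=300.0)
print(pvc_like > teflon_like)  # True: more stored energy comes back out
```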
Thanks for your thoughts about capacitance. When you say that the dielectric material "charges" (so to speak) over time, it seems to me as though the cable should sound worse over time. And the more charged, the worse it should sound.
The reason I say this is because air dielectric sounds best. And air is less chargeable than the worst sounding of the dielectrics, namely PVC. Electrostatic build-up is a very bad thing for audio, which is alternating current. Hence the many products on the market to avoid this.
So I don't agree with the "charging up" theory for dielectrics as the reason for cable burn-in resulting in a more natural sound.
I presently believe that the burn-in process could be due to something we can learn from what we know about magnetic domains.
When magnetic domains are large, or else there are several of them charged magnetically in a similar direction, the result is magnetism on a larger scale. Magnetism is a flux, a movement and direction of force.
Indeed, scale is all there is to magnetism. The stronger the magnet, the more uniform the domain orientation.
It could be that some different type of magnetic domain is present in wire such as copper or silver, which do not display ferromagnetism as iron does. Perhaps "magnetic domains" is not the proper term, since there is no memory to speak of, and hence no lasting magnetism. The domains might be present but not lasting, unlike in magnetized iron.
I postulate that during instantaneous applications, such as an electromagnetic alternating current running in real time, these types of "magnetic domains" I am imagining are indeed active and instantaneously influence the electromagnetic signal in some small way.
So, getting rid of these gets rid of the memory of the cable's metal. Even if it is copper or silver.
A speculative analogy, crossing the fields of magnetism and psychology:
Magnetic memory in iron would be like the recollection of something fixed in your own mind. You can draw this thought up at any time (= detect the magnetism at any time). It "stays put".
Memory in copper or silver would be like associative thought in your mind, brought forth by the similarity of one thought to another. Thoughts in this state of flux are intertwined and depend on the preceding thought. It must "move" to be awakened.
So, if my theory is correct, and if we can get rid of the "associative" type of memory in copper or silver, we've achieved the ultimate burn-in we can possibly achieve. The signals should pass without awakening associatively operating domains in the non-ferrous metal of the wire.
The reason I think this is true is that even cables of 99.999999% (add as many nines as you please...) pure silver with air dielectric still burn in.
One thing you need to do is maintain perspective here. The changes in cables being discussed are extremely small - much, much less than 0.1%. You need to consider that these effects are likely so small as to be inaudible compared to other differences (your head position, the volume level, speaker driver compliance, and your hearing from one session to the next).
Lessloss- You apparently didn't read anything I posted. A dielectric will only charge to a degree, then it stops. That can be predicted, and therefore a cable or capacitor can be "voiced". I also stated that it's only part of the issue of burn in. You're not disagreeing with me, but with science. Perhaps you can grasp this much shorter treatise on the subject: (http://www.empiricalaudio.com/computer-audio/technical-papers/dielectric-absorption-dissipation-factor-and-q)
I just started my system again after being down for a few months. It has taken about 40hrs of play time before it has started to sound good again. I have a cd that I always play to hear the effect, which I am very familiar with. So it is kind of scientific, and not just arbitrary. So there you have it...
If you listened to music on other systems (radio, car, iPod) then this might be partly acclimatization.
If the room has changed, or the position of the equipment or listening position has changed, then you may be acclimatizing to the new presentation. Emphasis is different when equipment position or the room changes, and this makes your CD sound different even if your gear has not changed. This change can be HUGE, of the order of several percent at specific frequencies and sometimes much more, changing slightly the timbre of some sounds and instruments, and until you get used to this new emphasis and re-adjust your sonic memory you will notice it.
When trying to detect small effects of less than 0.1% or much, much lower, it is actually not "kind of scientific" to trust your judgement and sonic memory of a particular CD. Hearing is good, but nothing like as resolving as a measurement made through precision instruments. For example, it is extremely difficult to hear the difference between 0.1% distortion and 1% distortion when listening to music - even though the difference is TEN TIMES.
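To put that audibility point in numbers: distortion quoted as a percentage is an amplitude ratio relative to the fundamental, so converting to decibels makes the "ten times" gap concrete. A minimal sketch:

```python
import math

def thd_percent_to_db(percent):
    """Convert a THD figure in percent to dB relative to the
    fundamental (a ratio, so 20*log10 applies)."""
    return 20.0 * math.log10(percent / 100.0)

print(thd_percent_to_db(1.0))  # -40.0
print(thd_percent_to_db(0.1))  # -60.0
# Ten times lower in amplitude is a 20 dB difference - yet the two
# are hard to tell apart by ear on music, as noted above.
```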
Another factor that can make a real, audible difference is the capacitors in your equipment. If you have had your equipment in storage for many, many months then some of the capacitors (depending on the design) may need reforming, or may have broken down when you first powered up. This can make a difference that would in certain cases be large enough to be audible. Another factor could be stiffening of the compliance of your speaker drivers after months of no use.
I am simply saying that you should look at all possibilities before assuming it is related to cable burn-in. (Wires are the least likely item in the entire human hearing/room/system equation to cause an audible difference that you would attribute to burn-in.)
A dielectric will only charge to a degree, then it stops. That can be predicted, and therefore a cable or capacitor can be "voiced".
Yes, but let us differentiate whether we are discussing cable "burn-in" or cable "settling-in" here. In this thread the two phenomena have already been distinguished. Now, I would ask that you describe whether the voltage retention you are alluding to should result in the recurring cable "settling-in" phenomenon (which occurs every time a cable is unused for a time and is reintroduced to the system), or in the "burn-in" phenomenon, which seems to need to take place only once. It has also been said here that the "burn-in" procedure, done with a device made solely for this purpose, results in a much more profound and obvious betterment of the sound than does simple "settling-in" in a system playing audio over time.
Or, might it be that voltage retention is altered slightly by "settling-in" and more profoundly by special "burn-in" signals, and that's all there is to it?
But the reason I feel it is important to differentiate between "burn-in" and "settling-in" is that settling-in seldom fools someone into believing that a cable or component swap has been made, whereas specially burning in the cables makes a world of difference, to the point where, really, the sound becomes so much more liquid, organic, smooth, etc., that it is difficult to believe it is the same cable in use.
That is the reason I believe "settling-in" is likely due to altered voltage retention, but cable "burn-in" is another beast altogether.
Don't get me wrong, I did understand your comment. I simply wanted to see how far we can go with the logic of the known science, speculate and discuss it further, if possible.
Shadorne wrote: Another factor could be stiffening of the compliance of your speaker drivers after months of no use ... If you listened to music on other systems ... If the room has changed or the position of equipment/listening position has changed ... Another factor that can make a real difference that will be audible is your capacitors in your equipment ...
To which I would add: Temperature changes (temperature being a parameter which is fundamental to semiconductor operation in innumerable ways, as well as one which affects most other electronic devices to some degree or another); line voltage changes; changes in the electrical noise environment (both airborne and through the power lines); on-going aging and/or burn-in of other system components; the cleaning and de-oxidization effects on connectors resulting from removing and replacing cables; for vinyl sources, the physical effects on the records of repeated re-playings, and loosening up of the cantilever suspension material in the phono cartridge; etc., etc.
The basic point being that even if our sonic perceptions are 100% accurate (which they certainly are not at least some of the time, when subtle differences are being assessed), it is very easy to attribute the difference to the wrong variable.
My own experiences have led me to believe, btw, that "stiffening of the compliance of your speaker drivers after months of no use" could very well be the most significant of these factors, if the system has not been used for a considerable period of time.
Obviously, given the dynamics of dielectric absorption, if a cable or capacitor is unused for a length of time, the energy absorbed by the dielectric will fully discharge. When put back into service, the dielectric will have to reach its target (optimum designed/voiced) level once again. Engage in whatever semantic gymnastics you choose from there. As long as my system's cables or electronics (new or reinserted) sound like music at the end of the 200 hrs of continuous signal that I generally feed them, I'm happy as the proverbial bumblebee in a clover patch. Happy listening!!
My computer was down for some time. I just got to read all the comments and respond.
I guess what I was referring to would be described as "settling in." I do not know, scientifically, what is going on. I only know that after several days of being hooked up and playing, there is quite a quality improvement in the music. It is very difficult to describe, or even discern, all of what is changing to make it sound better. But it is the equipment, and not my ears/brain or other human conditioning, that is changing.
Thanks for all the info, it is very interesting.
This is one of the best threads I've read regarding cable break-in/settling-in. Many threads about this subject have been started and, in my opinion, have gone down the drain, usually with the battle of the believers against the naysayers. I'm a believer in the break-in/settling-in of cables because my ears hear the differences. I was one of those who bought an expensive new set of interconnects several years ago (before I understood this burn-in process) and unfortunately sold it prematurely because it didn't sound "right" to my ears. After reading so many posts saying that cable break-in is real, I bought that same set of interconnects again, and this time allowed the 150 hours or so required for the break-in. This time I actually heard the changes (a roller-coaster ride) until one day the overall sound just opened up and beautiful music was brought forth. On the other hand, I have allowed more than enough time on some cables, be they speaker cables, interconnects, or power cords, and some cables just didn't gel with my system anywhere I tried them. I did re-sell those, but at least I knew that I gave them a fair listen by allowing enough time for the break-in process. Patience is a virtue when auditioning cables, new as well as used. I wonder sometimes when I peruse the ads for used cables, "Did that seller really keep these cables long enough to really hear their potential?" Only because it happened to me.
Rodman99999 wrote: As long as my system's cables or electronics (new or reinserted) sound like music at the end of the 200hrs of continuous signal that I generally feed them...
200 hours of continuous signal on a cable cooker is 8.3 days. The Audiodharma Cable Cooker recommends 3-4.5 days to burn in cables. Isn't eight days on a cable cooker too long?
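For what it's worth, the arithmetic checks out. A quick check using the figures as quoted in this thread (not verified against the manufacturer):

```python
# 200 hours of continuous signal, expressed in days, versus the
# 3-4.5 day burn-in range quoted above for the Audiodharma Cable
# Cooker (figures from this thread, not from the manufacturer).
hours = 200
print(round(hours / 24, 1))    # 8.3 days
low, high = 3 * 24, 4.5 * 24   # recommended range, in hours
print(hours > high)            # True: well past the quoted range
```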