Tuesday, July 29, 2008

A Chinese-language "Journey to the West" animated Olympics ad in BBC One prime time

A female voice sings "for hope and glory". The spot runs a little over a minute and features the Bird's Nest, a rather Westernized Monkey King and Pigsy, and a Sha Wujing I couldn't recognize.

(Actually the original illustrations in Wu Cheng'en's Journey to the West look rather like this: hairy faces and thunder-god beaks. The three brothers' looks often frightened people, heh.)

To me the ad carries no clear message. British viewers can't understand the words and, at first, surely can't tell what they are watching (not many of them would recognize the Bird's Nest), until the English word "Olympic" appears at the end.

Let's hope this wasn't just some official's snap decision!

It should either carry English subtitles or show the Olympic symbol right from the start.

"For hope and glory"

May all working people, all those who ought to be treated as citizens, truly hold their heads up and strive for their own hope and glory.

The road is still long. People of conscience, let's keep at it.


Sunday, July 27, 2008

Fwd: Sexual qigong — the anus-lifting, kidney-strengthening method

Stand up and move around often to prevent and relieve hemorrhoids.

Moxibustion at Zusanli (ST36) is very beneficial.
The sneeze-inducing method.
Press along the inner side of the arm up to the shoulder (Zhongfu) — quick notes from "Seek the Doctor Within Yourself 2".

Sexual qigong: the anus-lifting, kidney-strengthening method

This kidney-strengthening anus-lifting exercise is simple. Sit on a chair, concentrate, lightly close the eyes, then slowly apply force at the anus and, with one breath, tighten it, as if stopping urination midstream. Immediately release the effort and let the anus relax, then contract and relax it again; repeated for three minutes, it soon becomes familiar.

When the anus contracts, the penis lifts with a bow-drawing feeling. Practised daily, the sphincter grows stronger and, in time, erections come at will. The exercise is said not only to strengthen the essence but also to promote longevity and lasting youth.

Rejuvenating breathing: combining "abdominal breathing", the "contract-on-inhale, expand-on-exhale method" and the "anus-lifting exercise" is called rejuvenating breathing; practised repeatedly, it is claimed to strengthen the kidneys and the essence considerably.

In this method, draw the abdomen in while inhaling and, at the same time, tightly contract the anal sphincter; conversely, relax the sphincter while exhaling. Inhale slowly and deeply, as if drawing the breath all the way down to the anus, then release slowly.



Strengthening breath-regulation practice to improve the results of training
2007-02-27 15:23, 华奥星空 (Sports.cn)


By Yuan Shunxing

Breath regulation (tiaoxi) means the practitioner actively and consciously adjusts and controls the breath — its frequency, rhythm and depth — until it gradually meets the requirements and purpose of the practice. As one of the "three regulations" of health qigong, it is a key link in reaching the state where body, breath and mind are regulated as one, improving the effect of practice and promoting health. Correct breath-regulation methods are therefore essential for health-qigong practitioners. There are many such methods; the three most common follow.
1. Natural breathing
Different routines have different requirements: some call for natural breathing, some for abdominal breathing, and some let the practitioner choose the breathing to match the posture or the force of the movement. Even so, natural breathing remains the most commonly used method in health qigong.
Natural breathing here means a conscious, controlled version of ordinary breathing, usually nose-in/nose-out or nose-in/mouth-out. It is softer than everyday breathing and more clearly regular. Depending on sex, physiology, occupation and habit, it takes three forms: chest, abdominal, or mixed chest-abdominal natural breathing. Women mostly breathe with the chest (it rises on inhalation and falls back on exhalation); men mostly breathe with the abdomen, or with chest and abdomen together.
Three points deserve attention when practising natural breathing:
1. Use only light intention. A slight mental focus is enough; too much effort easily produces the three faults of "wind", "panting" and "qi". "Wind" means breathing so rough you can hear it; "panting" means no sound but a feeling of obstruction in the nose; "qi" means breathing that is silent and unobstructed but not yet fine and even. Keep intention light so the breath stays natural, soft and smooth, helping body and mind relax.
2. Coordinate breath and movement. As a rule, inhale as the limbs rise or open outward and exhale as they sink or close inward; inhale on the upper half of a turning movement and exhale on the lower half. Poor coordination reduces the effect of practice.
3. Coordinate breath and intention. Usually the attention rests on the required spot; think "still" while inhaling and "relax" while exhaling, adjusting when the intention has special requirements. The aim is to bring breath, intention and movement together — the "three regulations in unity".
2. Abdominal breathing
Abdominal breathing uses the movement of the diaphragm. In normal (shun) abdominal breathing the lower abdomen slowly swells on inhalation and draws in on exhalation; in reverse (ni) abdominal breathing it draws in on inhalation and swells on exhalation. Because the diaphragm travels further in reverse breathing, it massages the chest and abdominal organs more strongly. Note that trained abdominal breathing differs from natural abdominal breathing: in the latter the whole abdomen rises and falls of itself, while the trained form relies on the diaphragm and requires a finer, more even and longer breath.
Three cautions for abdominal breathing:
1. Never force it. Build on natural breathing and gradually add gentle intention: relax the lower abdominal muscles on inhalation and let the abdomen swell, contract them gently on exhalation and let it draw in. With repetition, natural breathing passes over into normal abdominal breathing; once that is familiar, reverse the intention (contract on inhalation, relax on exhalation) and it passes into reverse abdominal breathing. Jumping straight to reverse abdominal breathing without this progression easily causes breath-holding and other adverse reactions.
2. Do not deliberately push the belly out or suck it in, which can strain the abdominal muscles and organs. The rise and fall of the lower abdomen should come naturally from the intake and release of breath.
3. Attention may be kept on the dantian. Traditional medicine regards the dantian as where the body's essence, qi and spirit are most concentrated; there are upper, middle and lower dantian, and the lower one is meant here. Its exact position has long been debated — Shenque at the navel, Qihai 1.5 cun below it, or Guanyuan 3 cun below it. In the author's view the lower dantian is not a point or a surface but a volume occupying the whole lower abdomen below the navel, including Shenque, Qihai and Guanyuan. Keeping attention there during abdominal breathing is held to build essence, qi and spirit and strengthen physiological and immune function.
3. Anus-lifting breathing and sound-producing breathing
These are special breathing methods. Some of the four health-qigong routines now being popularized combine them with natural or abdominal breathing: the "monkey lift" in Wuqinxi (Five-Animal Frolics) uses reverse abdominal breathing combined with anus-lifting breathing; Liuzijue (Six Healing Sounds) combines reverse abdominal breathing with sound-producing breathing; and the "three plates falling to the ground" posture in Yijinjing requires the sound "hai" on exhalation, so that the internal qi rises and falls between chest and abdomen and heart and kidneys communicate.
In anus-lifting breathing, the lower abdomen and anal muscles are consciously contracted together on inhalation (draw in the abdomen and lift the anus) and relaxed together on exhalation. It is often used for conditions of "sinking middle qi"; in clinical work the author has patients with urethral syndrome, incontinence, enuresis, rectal prolapse, gastroptosis, nephroptosis or uterine prolapse practise reverse abdominal breathing and anus-lifting breathing alongside their treatment, with fairly good results.
Sound-producing breathing asks the practitioner to combine intention with slow, rounded movement and an even, fine, soft and long exhaled sound, placing intention in the breath and in the form; different sounds are used to vent the stale qi of different organs and so regulate them.
In short, breath-regulation training works not only on the frequency and rhythm of the breath but also on its fineness and softness. Proceed from the easy to the difficult, step by step; with long, persistent practice the "three regulations" gradually merge into one and the goal of strengthening the body is reached.


[Repost] Yang-strengthening, body-building qigong
1. The True-Qi Circulation Method

This method is said to strengthen the kidneys and nourish the brain. Once the Du (Governing) channel is open, true qi circulates through the Ren and Du channels (the "small heavenly circuit"); Baihui at the crown and the dantian attract each other like magnets, so the kidneys' original yin and yang can rise to the "sea of marrow" and replenish the brain, while the essence of the five organs returns to the dantian to replenish kidney essence. The two processes reinforce each other and are said to restore vigour.

From a physiological standpoint, opening the Du channel is said to promote mutual stimulation and nourishment between the adrenal glands and the pituitary, invigorating the whole system.

Records of circulating true qi appear as early as the Warring-States "Inscription on Circulating Qi" (行氣玉佩銘): "In circulating qi: deep, then stored; stored, then extended; extended, then descending; descending, then settled; settled, then firm; firm, then sprouting; sprouting, then growing; growing, then retreating; retreating, then returning above. Heaven's root is above, earth's root is below. Follow it and live; oppose it and die." The qi passes Huiyin and Weilü, surges up past Mingmen, climbs along Jiaji, passes through Yuzhen and reaches the brain, then descends again to the dantian — the small heavenly circuit. The practice is as follows:

Step 1: On each exhalation, attend to the pit of the stomach

Method: standing, sitting or lying, lips lightly closed, tongue on the palate, eyes lightly shut, distracting thoughts set aside. Watch the tip of the nose for a moment, then close the eyes and "look" inward at the pit of the stomach, listening to your own exhalation so that it makes no rough sound, and letting the intention sink toward the pit of the stomach with each out-breath. In time, true qi gathers there. If thoughts still intrude, use breath counting: count one on an exhalation, two on the next, up to ten, then start again, until the counting is no longer needed. Once settled, let the exhaled intention reach toward the dantian without disturbing the breath itself. Time: three sessions a day (morning, noon, evening), 20 minutes each; practised carefully, this step is usually completed in about ten days.

Reaction: after 3-5 days a feeling of heaviness at the pit of the stomach; after 5-10 days, a warm current flowing into it with each exhalation — a sign that true qi is gathering.

Effect: in the first days there may be dizziness, a sore back and unnatural breathing from unfamiliarity; this is normal and passes with continued practice.

Step 2: Intention and breath together sink toward the dantian

Method: when each exhalation brings warmth at the pit of the stomach, let the intention follow the breath and extend the sinking feeling downward, advancing naturally, step by step, toward the lower abdomen (dantian). Do not rush; too much force feels uncomfortable. Time: three sessions a day of 25-35 minutes; in about ten days the qi sinks to the dantian.

Reaction: each exhalation sends a warm current into the dantian; the belly may rumble, bowel movement increases and wind is passed more often — signs that true qi has reached the lower abdomen and the intestines are responding and expelling stale qi.

Effect: since true qi has passed the stomach region, digestion improves; once it settles in the dantian, the surrounding organs (intestines, bladder, kidneys and so on) gradually change, appetite improves, and bowel and urinary irregularities ease.

Step 3: Regulate the breath and keep the mind on the dantian

Method: when the dantian sensation is distinct, let the breath settle there half-consciously. Stop deliberately pushing the exhalation downward, lest the heat grow excessive and injure the yin fluids. Breathe naturally and simply keep the attention on the dantian.

Time: three sessions a day, each lengthened to half an hour or more. This stage builds strength in the dantian and takes longer; after about 40 days the lower abdomen feels full and firm.

Reaction: the warmth in the lower abdomen becomes marked; after ten-odd days a current of qi forms there and strengthens as practice deepens. When the force is sufficient it moves downward; there may be itching at the genitals, throbbing at Huiyin and warmth in the lower back, appearing earlier or later in different people.

Effect: with the Ren channel open, heart and kidneys communicate and the middle qi flourishes, so the mind is calm and sleep is sound. Practice keeps adding warmth to the stomach and intestines, digestion strengthens, weight and energy increase, original qi becomes abundant and kidney function improves; it is said to help impotence and irregular menstruation. Step 4: Open the Du channel — neither forgetting nor forcing

Method: after about 40 days of keeping the mind on the dantian, true qi is full enough to rise along the spine. Let awareness accompany it ("do not forget"), but if it stops somewhere do not drag it upward with the mind ("do not force"). Its speed depends entirely on the strength stored in the dantian: if that is insufficient it halts, and resumes of itself once the dantian is replenished. Forcing the passage divorces the movement from the dantian's strength and is very harmful, so let it take its course; the activity of true qi does not bend to the will. If it cannot pass Yuzhen at the back of the head, inwardly gazing at the crown helps it through.

Time: sittings may be increased in number and lengthened to 40-60 minutes. The time needed to open the channel varies: some pass it in an instant with a strong jolt, others need hours or days; most take about a week.

Reaction: building on step 3, the dantian feels full, the lower abdomen swells, the perineum throbs, the lower back grows hot and the qi stirs at Mingmen ("the moving qi between the kidneys"), with a force climbing the spine. Before the Du channel opens there is often a pulling feeling up the back and a band-like tightness around the head. Some people take fright at this and stop practising, losing all they have gained, which is a great pity; persist through this stage, for once the channel opens everything feels light and easy. Opening the Du channel is a leap, traditionally called "accumulated qi rushing through to the crown".

Effect: once the Du channel is open, each exhalation sends true qi into the dantian and each inhalation sends it to the brain, completing the Ren-Du circuit. Only then is the meaning of "breathing the essential qi, standing alone and guarding the spirit" appreciated. Essence and qi continually nourish the brain and cortical function improves; dizziness, tinnitus, insomnia, poor memory and weak, aching loins and legs caused by depleted kidney essence gradually disappear.

Step 5: The original spirit stores strength and nurtures vitality

Method: generally keep the mind on the dantian. After the Du channel opens, the other channels open in turn. If an active force appears at Baihui on the crown, the mind may be kept there instead; apply this flexibly — "with desire, observe the openings; without desire, observe the subtlety" describes how the different stages are handled.

Time: three sessions a day of an hour or more; the longer the better. It takes about a month for the various stirrings to subside, leaving the force at the lower and upper dantian more concentrated and vigorous.

Reaction: in the weeks around the opening of the Du channel there may be sensations of currents moving through the body and itching like crawling insects — signs that the channels are opening and true qi is abundant. Neither chase these sensations nor fear them; sit calmly and they settle. At deep stillness they vanish, the breath becomes faint, barely there, and the inner qi grows still more concentrated.

Effect: the fuller the true qi, the more it restores and strengthens the body's metabolism, giving abundant vitality, stronger immunity and resistance to disease, and the healing of chronic ailments; persistent practice brings health of body and mind and long life.
Yang-strengthening, body-building qigong
The five steps are progressive, and the body passes through three stages as true qi circulates. Steps 1-3 use regulated breathing to drive true qi and gather it in the dantian — the stage the ancients called "refining essence into qi". Step 4 uses the qi stored in the dantian to break through the Du channel and reach the brain — "refining qi into spirit", the second stage. From step 5 onward the practice matures: the channels are clear and true qi circulates freely — the advanced stage, "refining spirit back to emptiness". Only by grasping these five steps and three stages is the effect significant.

2. The Kidney-Strengthening Breathing Method

This is an old Chinese rejuvenation technique: first expel all the stale air from the body, then draw in fresh air. This "expelling the old and taking in the new" is the foundation of the method.

Such breathing is said to keep the body healthy, prevent illness, delay ageing, and restore and invigorate sexual function.

1. Abdominal breathing: with the hands resting below the navel you can easily feel the air moving in the abdomen. Standing or sitting, inhale slowly, then expel the breath in one quick burst. Once abdominal breathing is habitual, add mental concentration; two or three minutes whenever you have a spare moment is enough. The repeated contraction and relaxation of the abdominal muscles speeds the circulation and relieves congestion in the abdominal cavity and mesentery. After about two weeks the body feels fresher, appetite increases and the complexion improves.

2. Contract-on-inhale, expand-on-exhale: the reverse of the breathing above, said to have been a secret longevity and rejuvenation practice of the ancient immortals.

Sit on a chair or stand. First expel the stale air from the lungs and relax the whole body, then inhale with effort while drawing the abdomen in as far as it will go. Next relax the shoulders, let the abdomen swell out, and slowly breathe the air out. After two or three repetitions the method comes easily.

Note that on inhalation the tip of the tongue rests behind the upper teeth and the breath is taken entirely through the nose; on exhalation the tongue rests toward the lower jaw and the breath leaves through the mouth. Keep the attention focused, feeling the breath reach every corner of the body.

3. The Anus-Lifting Kidney-Strengthening Method

This kidney-strengthening anus-lifting exercise is simple. Sit on a chair, concentrate, lightly close the eyes, then slowly apply force at the anus and, with one breath, tighten it, as if stopping urination midstream. Immediately release the effort and let the anus relax, then contract and relax it again; repeated for three minutes, it soon becomes familiar.

When the anus contracts, the penis lifts with a bow-drawing feeling. Practised daily, the sphincter grows stronger and, in time, erections come at will. The exercise is said not only to strengthen the essence but also to promote longevity and lasting youth. (For reference only — editor's note.)

The Wudang Tiangang Palm Method
Category: miscellaneous arts. Posted 2007-05-21 by 渤海散人.

Tiangang Palm is said to be a top-level technique kept secret within Wudang. Five or six years of work is claimed to develop a power combining yin and yang: a raised hand can split metal and crush stone, an opponent can be controlled from three metres away, and internal qi can be emitted outward and external qi drawn in.

The method consists of six yang forms and three yin forms, nine exercises in all.

Preparation: rise at the yin hour (3-5 a.m.), choose a quiet spot with fresh air, loosen the joints, then sit cross-legged facing south with the back to the north, mouth and eyes lightly closed, tongue on the palate, spine erect.

"All rivers return to the sea": left hand inside, right hand stacked on its back, the left Laogong point resting on the lower dantian. Keep the mind on the dantian for three minutes, then, using natural-flow breathing (long, even and fine), rub the dantian clockwise ninety-nine circles at an even speed, imagining the qi of the whole body being kneaded into a ball in the dantian (this form gathers the scattered congenital qi into the dantian).

"Clearing the turbid, raising the clear": continuing, extend both arms straight ahead, hands in natural palms facing forward, with natural-flow breathing. Inhaling, imagine the essence of heaven and earth entering through Baihui at the crown and the Laogong points of both hands and gathering in the dantian, while drawing the arms back to the chest and lifting the anus. Exhaling, imagine the body's turbid qi leaving through Yongquan at the soles, while pushing the arms forward level until straight and relaxing the anus. Repeat for fifteen minutes (this form gathers the essence of heaven and earth and expels the body's turbid qi).

"Laogong opening and closing": continuing, stand with the feet shoulder-width apart, elbows bent, palms facing each other as if holding a ball. Keep the mind on the Laogong points for five minutes with natural breathing. Then press the palms together until about 8 cm apart (notice the feeling of qi pushing back), and pull them apart to shoulder width (notice the feeling of qi pulling). Repeat nine times (this form trains the emission of external qi).

Main practice: the six yang forms

"Erlang carries the mountain": in horse stance, push both arms out level to the sides, palms outward. Exhaling (natural-flow breathing), imagine dantian qi rising along the Ren channel, splitting under the armpits and rushing along the inside of the arms to the Laogong points. Inhaling, lead the qi back along the outside of the arms to the Ren channel and home to the dantian. Repeat until the strength is exhausted. At first five minutes is barely sustainable; when, with growing strength, 30 minutes is possible, move to the next form (this form mainly trains arm strength).

"Vajra wrings the rope": first prepare a bundle of round chopsticks bound tight with an iron hoop about a hand-grip wide. In horse stance, grip the two ends, using natural-flow breathing. Inhale with the mind on the dantian, lifting the anus; exhale, imagining dantian qi rising along the Ren channel, splitting under the armpits and reaching the Neiguan points along the inside of the arms, while both hands wring the bundle hard, anus lifted. Inhale again, leading the qi back along the outside of the arms to the dantian, lifting the anus and relaxing the hands. Repeat; when calluses form on the hands, move to the next form (this form mainly trains wrist strength).

"Green dragon drinks water": lie prone, supported on the palms and toes, hips drawn back. Using natural-flow breathing, inhale and swallow the saliva down toward the dantian; then expel the breath, imagining dantian qi rising along the Ren channel, splitting under the armpits and running along the inside of the body to the Laogong points of both palms, while the arms bend and the body dives forward, anus lifted. As the head is about to touch the ground, inhale, leading the qi back to the dantian while the head lifts, the arms straighten and the hips draw back, anus relaxed (neither too fast nor too slow). At first a dozen repetitions tire the arms; as strength grows, increase the count, and when a hundred repetitions are possible, move to the next form (this form strengthens the arms).

"Spirit monkey climbs the branch": prepare a bar level with your own height. Stand at the bar, using reverse breathing. Inhale, leading the qi into the Neiguan points while the arms press down and the body rises, anus lifted; when the neck reaches the bar, exhale, lead the qi back to the dantian, relax the anus and lower the body. Repeat until the strength is exhausted; the form is complete when a hundred or more repetitions are possible (this form mainly trains arm strength).

"The East Sea raises dust": prepare an open-mouthed vat about two feet across and one and a half to two feet deep, filled with clean water over an inch or so of sand, and raise it on a stand. Take horse stance before the vat, using natural-flow breathing. Inhale and swallow the saliva toward the dantian; then expel the breath, leading the qi to the Laogong points of the palms while both hands, formed into "lotus-leaf palms", strike the water surface hard, anus lifted. Inhale, lead the qi back along the outside of the arms to the dantian, relax the anus and lift the palms from the water. Repeat until the strength is exhausted. The form is complete when the palm strikes scatter the sand at the bottom of the vat (this form trains the palm surface).

"The arhat pushes the mountain": first build a frame two men high and hang from it a 30-50 kg bag of iron sand at chest height. Using natural-flow breathing, inhale and swallow the saliva toward the dantian; expel the breath, leading the qi to the Laogong points while both "lotus-leaf palms" strike the bag violently, anus lifted. Inhale again, lead the qi back to the dantian and relax the anus. Repeat until the strength is exhausted. At first the bag barely sways; when, after long practice, the palms can knock it one or two zhang away, the yang power of Tiangang Palm is complete and the yin forms may begin (this form deepens the yang power).

Main practice: the three yin forms

"Wind rises, clouds surge": sit cross-legged, spine erect, arms extended level in front, palms facing outward, tongue on the palate, mouth and eyes lightly closed, anus lifted, mind on the dantian for five minutes, using reverse breathing. Inhaling, imagine the essence of heaven and earth entering through the Laogong points of both hands and gathering along the outside of the arms in the dantian, anus lifted, while the arms draw slowly back to the chest. Exhaling, lead the qi along the inside of the arms to the Laogong points and out through the palms, relaxing the anus while the arms push slowly out until straight. Repeat until the strength is exhausted. When a faint breeze seems to brush the centres of the palms during practice, move to the next form (this form is the birth of the yin power).

"The force of a thunderbolt": stand two lit candles on a table and take horse stance one metre away, spine erect, both fists at the waist. Using reverse breathing, inhale and swallow the saliva toward the dantian, drawing in the anus; then expel the breath as both fists open into palms and thrust forward at the candles with a sharp exhaled shout, relaxing the anus and leading the qi to the Laogong points so that it penetrates outward toward the flames. Inhale, the palms closing back into fists at the waist, imagining the essence of heaven and earth entering through the Laogong points and gathering in the dantian, anus lifted. Repeat. When a thrust of the palms snuffs both candles, increase the distance; when they can be snuffed from three metres, the form is complete (this form deepens the yin power).

"Cold wind pierces the bone": place two lit candles behind a sheet of translucent paper and strike from three metres away (practising as in the previous form). When a thrust of the palms snuffs both candles without damaging the paper, the yin power of Tiangang Palm is complete (this form trains the penetrating force of the yin power).

Closing form: the same as "All rivers return to the sea".

Hand-wash for the yang forms: boil equal parts digupi (wolfberry root bark) and salt in water and wash the hands in it while hot. Beginners without a foundation should wash before and after every practice.

Yin-form qi-guiding paste: grind 9 g musk, 3 g sulphur, 9 g evodia, 9 g actinolite and 6 g dried ginger to a fine powder, mix with honey and apply to the palms while practising. It is said to draw dantian qi to the palm centres strongly and quickly.

Points of attention:

Wear a belt and wrist guards during the main practice.

Married practitioners must abstain from sex for the first hundred days.

Do not touch cold water immediately after practice.

Each session is effective only if carried to the point of exhaustion.

Do not practise in fits and starts.

If the aim is only to treat illness, the preparatory practice alone is enough.

After a period of practice you will be able to emit internal qi, but do not use it carelessly.

The preparation and the closing form must be practised every day.

The preparation, the yin forms and the yang forms may each be practised on their own.

A hundred days of practice brings real skill, but remember martial virtue and never strike a person rashly.
The Wudang Turtle-Breathing Method
[By 九真道人, from 道教茅山法术网, 2007-07-20]
Turtle breathing (guixi gong) is an internal cultivation practice of the Wudang Daoist tradition, also called "Xuanwu stillness" or "true stillness of turtle breathing". It has four parts: settling the mind, submerging the breath, true stillness, and emerging from stillness. The method is as follows:

1. Settling the mind
Settling the mind means regulating the mind; it is the preparatory stage. Beginners sit cross-legged with the upper body erect and the whole body relaxed and natural — relaxation lets the qi flow and the channels open. The hands form the "zi-wu clasp": the left thumb bends to press the tip of the left middle finger (the wu position), the right thumb passes through the ring formed by the left thumb and middle finger to press the root of the left ring finger (the zi position), with the right middle finger pressing opposite it; the clasped hands rest in front of the lower abdomen. The clasp helps reduce stray thoughts and aids entering stillness. Lower the eyelids; let the eyes regard the nose, the nose the mouth and the mouth the heart; place the tongue on the palate, and keep heart, spirit and intention at the navel without wavering. In time, the sense of the body sinks with the mind, head and hands seem to fade, and only a faint, fine breath is felt moving in and out at the navel, unmoving. The first stage is then complete.

2. Submerging the breath

Submerging the breath is the turtle breathing proper and begins the main practice. Sit as before. Flare the nostrils and inhale deeply, feeling the air enter the belly around the navel; take in only about eight-tenths of a full breath. Once the air is in the abdomen, lower the attention again as above, calm the mind and hold the breath. While holding, a counting method may be used (not counting breaths — just counting silently). Beginners find the hold uncomfortable: the chest and throat feel blocked, the air rises and wants to escape. At first simply let the breath out and inhale again as before; with practice the tolerable hold lengthens. When the air presses to escape, relax the lower abdomen and lower the attention, and the stuffiness subsides, though it soon returns; relax and lower again in the same way. If a beginner truly feels the breath cannot be held, breathe out. With practice, the number of these rises and falls per hold can gradually increase, but never force a long hold at the start — proceed gradually. One inhalation and one exhalation (one closing and one release) make one breath; after breathing out, even the breathing before inhaling again. Each session should run at least seven breaths and at most forty-nine. When the breath submerges into the belly without urgency or stuffiness and rests there long at ease, the second stage is complete.



3. True stillness

Once the two methods above are accomplished, true stillness can be entered. Sitting or lying are both acceptable. Take in and lower the breath, keep heart and intention at the navel, until heart and breath are both forgotten and only a single spark of awareness remains in the hollow space within the navel, unmoving for a long time: this is the gradual entry into true stillness. True stillness is the highest level of turtle breathing and is not easily attained. A verse of the Daoist Taiji school says: "Still upon nothing, settled of itself; aware without knowing — that is true stillness." Only those who have been there can grasp the state.

4. Emerging from stillness

Once the state of stillness is reached, coming out of it should also be governed by a method, and timed emergence is simple. The hand carries a set of time-branch points, one for each two-hour period of the day: the root joint of the ring finger is the zi point, the root of the middle finger the chou point, the root joint of the index finger the yin point, its second joint the mao point, its third joint the chen point, the tip of the index finger the wu point, the tip of the ring finger the wei point, the tip of the little finger the shen point, its third joint the you point, its second joint the xu point, and its root joint the hai point. To come out of stillness at a given hour, make fists with the thumb tips pressing that hour's point and rest the fists on the thighs; you will then emerge and close the practice at the right time.

Note: after reaching the state of stillness, because the true qi draws forward and heart and breath are stored away, the body often leans forward until the overlapped palms pillow the forehead in a turtle-resting posture. This is normal; let it happen — it greatly benefits the cultivation of qi — and do not be alarmed.






Red Sand Soul-Absorbing Palm (Northern Shaolin Shengchan school) — forum post, 2003-3-8

Red Sand Soul-Absorbing Palm is one of the nine great techniques of the Northern Shaolin Shengchan school. The practice has an upper and a lower part. The upper part is the foundation: after one month external qi can supposedly be emitted, and after three months one can pass the winter in central China in a single layer of clothing. The author practised for two months last year and, even after stopping, got through the winter — snow included — in a vest, a jumper and a jacket without feeling cold. The lower part is the four "power-building" forms: practised every morning after a hundred days of the upper part, it is claimed that within eight days leaves will move with the palm, and within a year the palm can break bricks and stone, project qi at a distance to control an opponent or damage objects, and produce other effects ordinary people can hardly imagine.

Upper part — standing foundation: stand with the feet a little wider than shoulder width, soles flat, toes turned slightly in, knees bent to a half squat and slightly drawn in, thighs as close to horizontal as possible, knees not passing the toes, upper body erect. Lift Baihui slightly, tuck the chin, keeping Baihui and Huiyin on one vertical line. Mouth lightly closed, tongue on the palate, eyes level; waist straight, hips gathered; the feet press outward with hidden force; the weight falls on the midline between the feet. The hands hang at the hips, arms almost but not quite straight, palms down, fingers forward and naturally spread, palm centres slightly hollowed. Breathing: natural at first; reverse abdominal breathing works better once familiar. Intention: on each inhalation imagine qi streaming without pause from both palm centres and both soles into Danzhong and the space behind it (the middle dantian), along two routes at once — soles, up the centre of the legs, to the lower dantian and then the middle dantian; and palm centres, along the inside of the arms, under the armpits, to the middle dantian. One inhalation and one exhalation is one round; think of the middle dantian on the exhalation. Do 49 rounds (building up gradually, adding 7 at a time from 7 until 49 is reached). Then change to the arms extended level in front, palms forward, fingers up, with the same intention, for another 49 rounds; then arms out to the sides in a T, palms outward, fingers up, for 49; finally arms raised straight overhead, palms holding up the sky, fingers backward, for 49; then close. Closing: palms over the navel, left hand inside and right outside (reversed for women); keep the mind on the dantian for about five minutes, rub 36 circles clockwise and 36 anticlockwise, pat the body down, and take a walk.

Lower part — the four power-building forms: the preparation is as above, plus five minutes keeping the mind on the dantian. First form: tilt the fingertips up, palms down, turn the arms in so the fingertips face each other in front of the hips. Inhaling, relax the whole body and raise the palms to chest height (fingertips still facing each other); exhaling, press the palms down as hard as possible; then inhale and relax, raise the palms again, and on each exhalation press harder than the last, always relaxing fully on the inhalation. Start with 7 repetitions and add one a day as the strength grows, up to 49; the remaining forms progress the same way. Second form: arms straight ahead, standing palms facing forward, fingers up. Inhaling, draw the palms back to the chest (palms still forward); exhaling, push them forward with full force until straight; relax on every inhalation and push harder on every exhalation. Third form: standing palms pushed out straight to the left and right, palms outward, fingers up. Inhaling, relax the whole body and draw the palms back beside the shoulders; exhaling, push outward with maximum force, with the same requirements as before. Fourth form: palms pushed straight up, palms skyward, fingers backward. Inhaling, relax completely and lower the palms to above the shoulders; exhaling, press them upward with maximum force, as before. Explanation: in all four forms relax the whole body on the inhalation, imagining qi entering the middle dantian from the palm centres and soles (the same route as in the foundation); on the exhalation imagine the qi of the middle dantian pouring into the lower dantian while the palms press down, push forward, push sideways or lift upward. Every eight days the skill may be tested: face a leaf, a sheet of thick paper or a paper ball at about one metre, breathe and intend as in practice, and imagine pushing the object on the exhalation.

Points of attention: 1. Practise the foundation for a hundred days before starting the four power-building forms, and keep practising the foundation alongside them; the best time for the foundation is 3-7 a.m. 2. Do not let the hands touch cold water within 30 minutes of practice, 15 minutes at the very least. 3. No sex for the first hundred days; do not practise the four forms the day after an emission or intercourse — you bear the consequences of any injury yourself. 4. For a month after starting the power-building forms do not spar casually, to avoid injuring people. Yin Hui (471002 河南省洛阳瀍河区东关街153号 孙少辉), Shaolin and Taiji (《少林与太极》), 2000.2

Saturday, July 26, 2008

Whoever decides how the foreign-exchange reserves are used has no conscience; on the farmers' behalf, I curse him

Read a Tianya post about farming parents in the "double rush" (harvesting and replanting): a month of heavy labour under a blazing sun, with leeches.

Couldn't some of that foreign exchange buy small hand-held farm machines and lend them to farmers for free?

Friday, July 25, 2008

bo's analysis approach / Scoring Points: How Tesco Is Winning Customer Loyalty

ABC classification: rather than 36 broad categories, use A = high value, low volume; B = low value, high volume; and so on.

Rather than the raw frequency, use the change in frequency.

Re-express a variable in a different form.

RFM: give each of R, F and M 5 bins, then cluster (into three result clusters), then use a decision tree to derive the clusters' attributes. (This worked in a bank.)
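A minimal Python sketch of that bank workflow (5 quantile bins per RFM field, k-means into three clusters, then a decision tree to describe them); the column names and the synthetic data are my own assumptions, not from the original project:

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic per-customer summary (hypothetical column names).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 500),
    "frequency":    rng.integers(1, 50, 500),
    "monetary":     rng.gamma(2.0, 300.0, 500),
})

# Step 1: 5 quantile bins per RFM variable (1 = worst, 5 = best).
df["R"] = pd.qcut((-df["recency_days"]).rank(method="first"), 5, labels=False) + 1
df["F"] = pd.qcut(df["frequency"].rank(method="first"), 5, labels=False) + 1
df["M"] = pd.qcut(df["monetary"].rank(method="first"), 5, labels=False) + 1

# Step 2: cluster the binned scores into three groups.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df[["R", "F", "M"]])

# Step 3: a decision tree re-derives simple rules that describe the clusters.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[["R", "F", "M"]], df["cluster"])
print(export_text(tree, feature_names=["R", "F", "M"]))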



Scoring Points: the story of Tesco and its customers

1. Tesco is not a coffee shop (customer data there brings little net profit), so don't hand out cards automatically; require a commitment from the customer (an application for the loyalty card).
2. No gold tier: everyone gets the same reward rate (unlike airlines' tiering, which tells the majority of low-value customers that they are not valued).
3. Rewards are settled automatically on a regular cycle; customers cannot choose how to redeem.

Four kinds of reward: points, discounts, information, and priority/privileges. They can be combined.

The value of a reward point must stay stable, because it is also a currency. Famous failures: Beenz, Kibu, Go.com.

Membership discounts may lower the profit earned from loyal customers (if the discount is too deep) — preferential pricing; but the deep discounts can be steered toward high-margin products.

Information: the wine club, the parenting (baby) club.

Privileges: VIP airport lounges; American Express's Gold Charge business card.

Sunday, July 20, 2008

Data projects looked at from a business angle:

Customer classification: do repeat buyers share any traits? Can large purchases be repeated? Trace the reasons why some never come back?

Key angles: profit per square metre? Costs? Inventory structure (capital tied up)? Turnover rate? Turnover by product?

Seasonality? Cost of customer service? Buying behaviour (motivation, basket composition, affinities between products, cross-selling effects)?


Write up: methodology (e.g. 80/20), the regularities found, the anomalies found.

Conclusions (if no good conclusion emerges): the current system cannot provide insight; is the system's granularity too coarse? no downstream data filtering (e.g. identifying the MagicCard counter discount card).

Contractors contribute little (compared with credit-card customers), so is the cost of managing them too high?

How to identify a construction project from a continuous stream of transactions: besides the sequence, look for project-only items (e.g. bathtubs); if those are bought, it's a project. ONLY analyse the MOST profitable customers at the beginning!!

AHP decision analysis:

Chapter 8: the AHP decision-analysis method (course slides, jpkc.ecnu.edu.cn/0802/study/8-1.swf).
The analytic hierarchy process (AHP), proposed by the American operations researcher T. L. Saaty in the 1970s, is a decision-analysis method combining qualitative and quantitative judgment.



List the pairwise comparison matrix on Saaty's scale: 1 means the two criteria are equally important, 3 means i is slightly more important than j, 5 noticeably more important, 7 clearly more important, 9 extremely more important.

The mirrored (transposed) entries take the reciprocals: 1/3, 1/5, and so on.

Normalize each column into proportions, then combine (average) across each row; the row results are the final weights.

ahpexmaple.pdf
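A small Python sketch of the weight calculation described above, on a made-up three-criteria comparison matrix (the consistency check at the end is standard AHP practice, not something from these notes):

import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale).
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Normalize each column so it sums to 1, then average across each row:
# the row averages are the approximate priority weights.
col_normalized = A / A.sum(axis=0)
weights = col_normalized.mean(axis=1)
print("weights:", np.round(weights, 3))          # the weights sum to 1

# Optional consistency check (a consistency ratio below 0.1 is usually acceptable).
lam_max = (A @ weights / weights).mean()
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
RI = 0.58                                        # Saaty's random index for n = 3
print("consistency ratio:", round(CI / RI, 3))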

Saturday, July 19, 2008

Sales analysis

同比 (year-on-year): compared with the same period last year.
环比 (period-on-period): compared with the previous month/quarter/week of this year.

This piece by Larry Goldman is quite good; it points out two things that are hard to get out of a sales database: product correlation, and the nonexistence of activities.



Database Design is Difficult
Customer intelligence
Larry Goldman
DM Review Magazine, January 2008
I regularly hear the complaint, “I can’t get basic information” from the business side - particularly from sales and marketing individuals. Often, the phrase is distorted. Sales departments typically have access but haven’t spent the time to learn how to download the report or use the cube. Often, they have forgotten their training, don’t remember where or how to access the reports and have given up. Marketing departments can usually get the basic information, but “simple” or “basic” is often defined differently between the customer intelligence team and marketing.



Customer analysis is very difficult. Many relational databases and database designs are not able to handle some of the difficult concepts supported by customer analysis. Time series, product correlation and the nonexistence/existence of activities make analysis very difficult. Many times these requirements come across as ad hoc requirements that may be ignored in order to optimize for scheduled management reports.



To avoid being blindsided by these requirements and requests after implementation, those responsible for marketing databases should consider the following types of analysis:



Multiproduct or service relationship to the customer.
Identifying specific events that have occurred over time.
The nonexistence of events, transactions or behavior within the database.
Product Correlations




Though all sales and marketing databases are customer-centric at their core, the first question from a customer analyst is, “What did they buy?” This simple question is usually easy to address, as is the management report that shows sales, quantities and profitability broken down by product category or division. The more difficult questions include:


How many product categories do specific customers use or purchase?
Which product categories do specific customers use or purchase?
What combination of product categories do specific customers use or purchase?
How many customers use a specific combination of products?
What type of customers use specific combinations of products?
Many organizations focus on the breadth of their product lines. Typically, there is a huge opportunity to drive more usage across product lines rather than push single products to individual customers. This is true for telecommunications companies who want their customers to use landline, wireless and DSL or technology companies who want to sell computers, printers, modems and other accessories.



Time-series events and time series help us understand how fast customers adopt or respond to new marketing pitches. Common questions include:



How fast did the customer get to a certain revenue level?
How long did it take the customer to buy their second product?
How long has it been since the customer’s last purchase?
What is the average time between purchases?
After introducing a marketing program into the field, how long did it take customers to adopt?
Like product relationships, it is difficult for databases and query tools to compare dates in between records in an efficient manner. It can also be difficult to know which record to compare to in a high-transaction environment. Clickstream analysis poses this problem. Correlating browsing to buying is a very disconnected process. It is difficult for query tools to correlate massive amounts of clickstream data together without help from the database.



All dates need to be documented across transaction and customer records. Last purchase date, last login date and last clickthrough date should be precalculated instead of calculated on the fly. You may find yourself attaching dates to a high percentage of fields in your customer record. It is just as important to know when a customer purchased from a specific category as the last time they purchased from that specific product category.
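As a rough illustration of the precalculation the article recommends, here is a pandas sketch that derives "last purchase date" overall and per product category from a toy transaction table (all table and column names are assumptions):

import pandas as pd

# Assumed transaction table: one row per purchase.
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "category":    ["A", "B", "A", "A", "C"],
    "purchase_dt": pd.to_datetime(
        ["2008-01-05", "2008-03-02", "2008-02-11", "2008-04-20", "2008-04-25"]),
})

# Precompute "last purchase date" overall and per category,
# so reports never have to derive it on the fly.
last_overall = tx.groupby("customer_id")["purchase_dt"].max().rename("last_purchase_dt")
last_by_cat = (tx.pivot_table(index="customer_id", columns="category",
                              values="purchase_dt", aggfunc="max")
                 .add_prefix("last_purchase_"))
customer_dates = last_overall.to_frame().join(last_by_cat)
print(customer_dates)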



Nonexistence of Activities



As already stated, it is hard enough to identify specific events in the database. It becomes even more difficult to identify what the customer has not done yet. For example:



Who has purchased product category A but not category B?
Who has not responded to any of the cross-sell campaigns?
Who has not logged into the Web site in a while?
These types of queries make it very difficult to discern inaction - which is just as important as actions that have happened.
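A pandas sketch of the first two "nonexistence" questions above, on toy data with assumed table and column names — the point being that inaction has to be derived by set difference or anti-join rather than read off directly:

import pandas as pd

# Assumed tables: purchases per customer/category and campaign responses.
purchases = pd.DataFrame({"customer_id": [1, 1, 2, 3],
                          "category":    ["A", "B", "A", "A"]})
responses = pd.DataFrame({"customer_id": [1]})

# "Purchased category A but not category B"
bought_a = set(purchases.loc[purchases["category"] == "A", "customer_id"])
bought_b = set(purchases.loc[purchases["category"] == "B", "customer_id"])
print("A but not B:", sorted(bought_a - bought_b))               # -> [2, 3]

# "Has not responded to any cross-sell campaign": anti-join against responses.
all_customers = purchases[["customer_id"]].drop_duplicates()
non_responders = all_customers[~all_customers["customer_id"].isin(responses["customer_id"])]
print("non-responders:", non_responders["customer_id"].tolist()) # -> [2, 3]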



Database designers must predict difficult queries from the beginning. Scenario and query prototyping should be standard operating procedure so you don’t find out during testing that your data model can’t support the queries. Hardware and software changes and modifications are not the answer. We have seen not-so-sophisticated marketing departments bringing large-scale IBM and Teradata systems to their knees with straightforward data models. To avoid the “I can’t get basic information” complaint, you must simulate these difficult queries and work with the business on how they will approach certain list selections or analysis. You can’t defend against creative marketers because they will always find the killer query. Just try to make sure everyone agrees on “killer” versus “basic.”




Granted, the target systems (DW, data marts and cubes) contain the data obtained from the source systems, so it does make sense that the content is similar. Similar content is not the problem; a similar physical design, however, is. Rather than applying the best practice design techniques they’ve learned to support DW and BI, people copy the underlying enterprise applications’ designs to the DW. This process propagates the limitations of the enterprise applications for reporting and analysis without taking advantage of DW best practices such as dimensional models or the hub-and-spoke architecture. History keeps repeating itself, resulting in frustrated businesspeople who aren’t getting the information they need. Meanwhile, IT is wondering why BI is not yet pervasive


Reporting and analytics require two data areas: hub and spoke. Most people understand the need for and benefits of building a data warehouse - the hub. You gather all your source data, cleanse it and make it consistent. You store your historical data once in the data warehouse and then distribute that data many times throughout your enterprise. “Create once and use many times” is the mantra you should follow in creating applications, data and services. It is efficient, and it is the most productive approach to support reporting and analysis. Most enterprises accept this as a best practice.

But it is equally important to create data marts or cubes from the data warehouse to enable reporting and analysis. These are the spokes. The benefit is that it is more efficient to create data marts oriented toward a business process or a group of businesspeople than to continually reinvent the wheel every time you create a report. Once again, I am talking about “create once and use many times.”



Integrating Web analytics with CRM system should come as no surprise as an important step to tying the entire lead generation and sales process together. The three main reasons for integrating Web analytics with CRM are:

Better marketing investment prioritization (both time and money ROI)
Measure marketing’s contribution to the sales pipeline, and
Enable sales intelligence for improved selling context.
Businesses and their marketing organizations that promote products or services with complex sales cycles often lack visibility beyond generating the initial sales leads from their Web sites, trade shows or other offline initiatives. For example, once Web leads are generated, they move into the black hole of the sales force automation (SFA)/CRM system with almost no ability to tie results such as closed deal quantities and sales values back to the marketing campaign costs, thus leaving fully measured ROI (or return on marketing) unanswered. To provide concrete answers to these ROI and other related questions, marketers should endeavor to integrate their Web analytics/campaign management solutions together with their SFA/CRM applications.






Mikel Chertudi is the senior director of Demand and Online Marketing at Omniture. He oversees global strategy and execution of demand creation for new customer acquisition and cross selling strategies including the tactics of search (both paid and SEO), email, newsletters, display, content syndication, direct and dimensional mail. Chertudi is responsible for thought leadership-based marketing including best practice guides, Web seminars and reports. He and his team have deployed a comprehensive marketing technology-based solution for increasing response effectiveness by intertwining search marketing automation, email and direct mail lead nurturing automation, progressive telephony, on-site behavioral targeting, ad serving, A/B and multivariable testing, personalized prospect portals with Web analytics as the anchor technology to deploy a highly relevant prospect and customer experience.

For more information on related topics, visit the following channels:

Analytics
Customer Relationship Management (CRM)
Web Analytics

Retailers Using Analytics are Outperforming Rivals — this article covers the practices of several large companies such as Tesco, Best Buy and Walmart.

Dealing with Data
Greg Todd

Clive Humby, Scoring Points: How Tesco Is Winning Customer Loyalty — an interesting book. It notes that loyalty cards are something of a zero-sum game, and describes how, after Clubcard, Tesco took the baby market from Boots (expensive, but trusted by new parents): it won that "inner circle" sense of trust by building the Baby Club card.




Turning Customer Data into Profits

In summary, the consolidation is disappointing as much as it is exciting, and you may or may not benefit as a result

After a positive premiere, the marketing system started to lose steam. Within three months, some common themes started surfacing:

Standard metrics on customer usage reports were not matching between the data warehouse and other reporting systems.
Reports from the analytical data mart were not matching to the data warehouse.

Nobody seemed to be looking into the above issues.
Nobody seemed to know whom to call to have these issues investigated.

Data definitions need to state the load cycle, or users will misread them. For instance, customer count may be defined as the number of customers for a specific set of criteria. However, further information such as, "this field is only populated once a month" is also needed, or the user may assume it is loaded daily like the rest of the data warehouse. Many instances of this type of missing "use" information had users running erroneous queries.


The ten taboo phrases in a sales manager's work

Before the interview, a foreign company asked me to submit a sales-analysis report for the product first. How should it be written, and what does it usually include? Thanks.







独爱人山:
What product is it? There's a colour-TV market sales report here that can be downloaded for free. Just change the product name and a few of the details.








四条棍99:
The state of market competition, and the collection of sales data.

Whether the report is written or oral, nothing is more mortifying than being asked a question by your boss and having nothing to offer but "maybe", "probably", "should be", "roughly" and "it seems".
A standard sales report may use the following data:
1. Market size, market capacity and growth rate;
2. Sales and growth of the main competing products (ideally by item);
3. Your own products' sales targets, actual sales and growth by item;
4. Competitors' recent new-product launches, promotions and display activity (the finer the better — at least the channel, region, products and effect of each activity);
5. Each distributor's purchases, sales and inventory by product (don't forget goods in transit);
6. Detailed receivables for each distributor and direct account;
7. Execution and evaluation of the main promotions, display activity and distribution drives in the region during the period;
8. The region's marketing budget, actual spend and remaining balance.
A huge pile of material — no wonder many front-line sales managers say they would rather wear out their legs than write reports.
In fact, collecting these data is not that hard; it is mostly a matter of habit, because much of the material does not need to be compiled personally. Experienced sales managers list the data and tables they need well before the sales meeting and hand the list to an assistant or deputy to prepare, even asking them to do some preliminary analysis; they may know the numbers better than you…


Performance and sales analysis
Chapter 8
Key points of this chapter
Building a data list.
Using the data form to edit, add or delete records.
Sorting and filtering data.
Excel's subtotal feature.
Using the Conditional Sum Wizard to perform totals.
Building pivot tables and pivot charts.
Introduction
This chapter uses Excel to analyse and summarize product sales data and to build a sales-performance ranking and sales charts, so that senior managers can see how each product is selling and shape their marketing strategy accordingly.
8-1 Building the sales data list
Features covered:
Creating an Excel list.
Adding order records to the list.
Understanding how the list automatically extends formulas and formats.
Example file: Ch08-01
Result:
Freeze panes can be added here so the header row stays visible
Two new records added to the list
8-2 Adding and deleting records with the data form
Features covered:
Adding records to the list with the data form.
Editing list data with the data form.
Example file: Ch08-01
Result:
One record added via the data form
8-3 Building the salesperson performance ranking
Features covered:
Using AutoFilter to find the February orders.
Totalling each salesperson's sales.
Sorting by salesperson number.
Subtotalling by salesperson number.
Building the performance ranking.
Using cell references.
Auto-filling a series.
Example files: Ch08-02, Ch08-03
Result:
See the result file: Ch08-04
Salespeople ranked by total sales
8-4 Using the Conditional Sum Wizard to total sales volumes
Features covered:
Installing the Conditional Sum Wizard from the add-ins.
Running the Conditional Sum Wizard to compute product sales volumes.
Step 1: specify the range to sum.
Step 2: choose the column to sum and set the condition.
Step 3: choose the result items.
Step 4: specify where to put the result.
Example file: Ch08-05
8-5 Building the sales pivot table
Features covered:
Using the PivotTable and PivotChart Report feature to build a product-by-region sales summary.
Changing how a pivot-table field displays its data.
Adding and removing pivot-table fields by dragging with the mouse.
Example file: Ch08-06
Result:
The product-by-region sales pivot table, completed
8-6 Drawing the sales pivot chart
Features covered:
Using the PivotTable and PivotChart Report feature to build a product-by-region horizontal bar chart.
Polishing the pivot chart:
Adding a chart title.
Adding a data table.
Changing the title alignment.
Example file: Ch08-07
Result:
See the result file: Ch08-08
The product-by-region bar chart, completed
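For readers working outside Excel, the product-by-region pivot built in sections 8-5 and 8-6 can be reproduced in a few lines of pandas; the order table and its column names below are invented for illustration:

import pandas as pd

# Assumed order list, one row per order line.
orders = pd.DataFrame({
    "product": ["TV", "TV", "Fridge", "Fridge", "TV"],
    "region":  ["North", "South", "North", "North", "South"],
    "sales":   [1200, 800, 950, 600, 400],
})

# Product-by-region sales pivot, the pandas equivalent of Excel's PivotTable.
pivot = orders.pivot_table(index="product", columns="region",
                           values="sales", aggfunc="sum",
                           fill_value=0, margins=True, margins_name="Total")
print(pivot)

# The same pivot can be drawn as a bar chart (the "pivot chart" step):
# pivot.drop("Total").drop(columns="Total").plot.barh()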


Environmental-protection industry: sales analysis before a promotion (品牌加盟网, 24 March 2008). When the subject of promotions comes up, most planners first think about what kind of event will draw a crowd. Putting the event first, rather than the sales analysis first, is in my view a mistake; it easily turns into promotion for promotion's sake.

So how should the sales analysis be done?

Market environment analysis

If your market is a prefecture-level market, there will always be a reference market — one whose sales rhythm runs slightly ahead of your region. Roughly speaking, the reference market's sales structure and product prices in the last period become your market's structure and prices in the next period, so when you do your sales analysis the reference market's recent performance is an important benchmark.

This kind of analysis is easier for chains, because sales records are closely guarded commercial data that no company gives away lightly; a stand-alone local retailer can only pick up scraps of price information and use them to anticipate how prices will move.

Analysis of historical sales data

Historical analysis usually looks back only one cycle: to forecast this year's National Day sales, analysing this year's May Day plus last year's May Day and National Day is enough. The items to analyse are:

1. Each brand's share of sales within each category
2. Each sub-category's share of sales within each category
3. Each brand's share of sales by sub-category
4. Each price band's share of sales within each category
5. Each price band's share of sales within each brand

The analysis should identify each category's strong brands, strong sub-categories and strong price bands, and each brand's strong sub-categories and price bands, as the basis for choosing the key promotional products. It helps the operator grasp the structure of consumer demand and how it is changing — not just a static picture of the sales structure but a sense of the direction in which it is moving.

Which indicators does this historical analysis usually use, and what should you watch for when using them?

Sales structure

The first dimension for understanding sales at a finer grain is the sales structure: the share of total sales contributed by each kind of product the company sells. Compute each product class's share of total sales, collect all the shares into one table, and you have the company's sales structure.

Formula: single-item share = sales of the product class ÷ the company's total sales for the same period × 100%

To compute the sales structure you first have to classify what the company sells. A general merchant such as Gome can split by major category — refrigerators, washing machines, televisions, computers, mobile phones, kitchen and bathroom appliances, digital goods; a single category can be split further, for example televisions by display technology (CRT, LCD, plasma, rear-projection) or by screen size (21", 25", 29", 32", 42"). Structures computed along different dimensions reveal different problems.

Year-on-year growth

Year-on-year growth measures how a result (sales, profit, etc.) compares with the same period last year.

Formula: YoY growth = (this year's figure − the same period last year) ÷ the same period last year × 100%

Many products sell seasonally, or cyclically; the implicit assumption behind the YoY indicator is that the factors affecting sales this May and last May are roughly the same in structure and strength.

In practice that assumption only holds within limits — this year and last year always differ somewhere — so a YoY figure should always be accompanied by a qualitative note explaining the specific circumstances.

For example, Chinese New Year fell on 29 January 2006 but on 18 February 2007, so January 2007 sales were bound to be far below January 2006 and February 2007 far above February 2006; you cannot conclude from the January comparison alone that sales are poor. That case is obvious; others are subtler. One appliance store's kitchen-and-bath sales in August 2006 fell sharply against August 2005, and no amount of analysis could explain it, until the former store and counter managers were tracked down: several housing estates near the store had been handed over in August 2005 and the new owners had all come in to buy kitchen and bathroom appliances for their fit-outs, inflating that month's sales. So before judging a YoY comparison, check the concrete circumstances on both sides.

The weakness of the YoY indicator is that the two periods being compared are far apart in time.

Period-on-period growth

Period-on-period (sequential) growth measures how a result (sales, profit, etc.) compares with the immediately preceding period.

Formula: period-on-period growth = (this period's figure − last period's figure) ÷ last period's figure × 100%

Because the two periods are adjacent, the factors affecting sales change little, which is what makes the comparison meaningful; it avoids the YoY indicator's problems of distance in time and hidden changes. But it has its own weakness: it does not suit strongly cyclical products.

Products whose sales concentrate on holidays and weekends are also unsuitable. In Beijing, for example, home appliances are promoted mainly at weekends, so the two weekend days carry a large share of a week's sales; a month with fewer weekends than the previous month will drag the sequential growth figure down.

So both the YoY and the period-on-period indicators have defects; neither tells the whole story by itself, and both must be read against the actual circumstances to understand how sales are really changing.

Same-period sequential growth

Same-period sequential growth is last year's sequential growth: the result for the same period last year compared with the period before it, last year.

Formula: same-period sequential growth = (same period last year − previous period last year) ÷ previous period last year × 100%

Same-period sequential growth is in effect the reference against which this year's sequential growth is judged.

[Example] A company sold 600,000 yuan in August 2006 and 800,000 yuan in September 2006, versus 500,000 yuan in August 2005 and 700,000 yuan in September 2005. Compute September 2006's YoY, sequential and same-period sequential growth, and August 2006's YoY growth.

September 2006: YoY = (80 − 70) ÷ 70 × 100% ≈ 14%; sequential = (80 − 60) ÷ 60 × 100% ≈ 33%; same-period sequential = (70 − 50) ÷ 50 × 100% = 40%. August 2006: YoY = (60 − 50) ÷ 50 × 100% = 20%.

Clearly, of the three indicators, the sequential and same-period sequential figures are the comparable pair, while August 2006's YoY is the right comparison for September 2006's YoY. In the example, September 2006's YoY growth looks high but is actually weaker than August 2006's; and its 33% sequential growth still lags the 40% same-period sequential benchmark. Sales should therefore be judged from several angles at once.
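The formulas above are easy to script. A small Python sketch using the article's worked numbers (units of 10,000 yuan):

def share(item_sales, total_sales):
    """Single-item share: item sales / total sales of the same period * 100%."""
    return item_sales / total_sales * 100

def growth(current, base):
    """Generic growth rate: (current - base) / base * 100%."""
    return (current - base) / base * 100

# Worked example from the article.
aug_2006, sep_2006 = 60, 80
aug_2005, sep_2005 = 50, 70

print("Sep 2006 YoY               :", round(growth(sep_2006, sep_2005)))  # ~14%
print("Sep 2006 sequential        :", round(growth(sep_2006, aug_2006)))  # ~33%
print("Same-period sequential ref :", round(growth(sep_2005, aug_2005)))  # 40%
print("Aug 2006 YoY               :", round(growth(aug_2006, aug_2005)))  # 20%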

Wednesday, July 16, 2008

OUTPUT your job! Excel manner

Formatting an Excel sheet:

1. Put commas between thousands and millions — 1,000,000, not 1000000 — and drop the decimals as well: 1,000,000, not 1,000,000.11.

sum(A1:A7): when dragged to the right, this formula adapts to sum(B1:B7).

=IF(ISERROR((1/SEARCH("15",$A2))),0,(SEARCH("15",$A2)/SEARCH("15",$A2))) — the $A keeps the column constant when dragging to the right, but the row still changes to $A3 when dragging down.

(After pasting, the pasted range stays selected, so you can run a replace within just that selection.)

$A$1 fixes both the column and the row, so the reference does not change when dragged in either direction.

Saturday, July 12, 2008

Will ICANN open up free-form top-level suffixes?

According to a Computing article by Neon Kelly, ICANN is relaxing the restrictions on domain suffixes such as .com and .co.uk. In theory anyone will be able to apply to become a registry and freely create top-level suffixes such as .car, .love, or .whatever-I-want-to-sell.

Note: not domain names, but top-level domain suffixes.

Item 2: easyJet foresees that staff bringing their own communication devices (which can bypass corporate network controls, e.g. to reach Facebook) is unavoidable and that such devices will inevitably connect to the corporate environment; it believes this unavoidable trend can be turned from a nuisance into an asset and put to use.

Wednesday, July 09, 2008

my CLEM experience

CACHE usually brings trouble: it can lead to an ORA-12704 "character set mismatch" error.

MUST use Select and Filter to reduce the possible data reads: only read the rows/columns that you NEED to produce your result (average transaction count or whatever).

CARMA is quicker than Apriori, and lets you adjust the support threshold when viewing the result(?).

The ID in the Sequence node is the person's ID, not the transaction ID.

Spend-per-visit distribution: the X axis is increasing spend per visit, the Y axis is the number of people at that spending level.

Another good diagram, lift-like: the X axis is the % of customers, the Y axis is a percentage; several curves can be drawn to show what Y value each share of customers produces.

Press F3 to delete all connections of the selected node;
press the middle button and drag to create a connection; double middle-click to bypass a connected node.

Pareto chart: sort Sales descending (98 rows, B2 to B99); add a column with the share formula (e.g. C3=B3/SUM($B$2:$B$99)) and format C3 as a percentage; add another column D with D2=B2 and D3=D2+B3, then drag down (the cumulative of B). Then draw a custom chart with two axes, columns plus a line.
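The same Pareto chart can be drawn outside Excel; a matplotlib sketch on synthetic sales figures (it plots cumulative share on the second axis rather than a running total, but the shape is the same):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Assumed data: sales per product line, like column B in the note above.
rng = np.random.default_rng(1)
sales = pd.Series(rng.gamma(1.5, 1000, 98), name="sales")
sales = sales.sort_values(ascending=False).reset_index(drop=True)

share = sales / sales.sum()            # column C: each line's share of total
cumulative = share.cumsum()            # column D: running (cumulative) share

fig, ax1 = plt.subplots()
ax1.bar(sales.index, sales)            # bars: sales, sorted descending
ax1.set_ylabel("sales")
ax2 = ax1.twinx()                      # second axis, like Excel's 2-axis chart
ax2.plot(sales.index, cumulative * 100, color="red")
ax2.set_ylabel("cumulative %")
plt.title("Pareto chart")
plt.show()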

Excel sums: SUM($B$1:$B$99). A colon or a comma may be used, but a comma means only the listed cells, not the range in between.

You can chain a Quest and a C5.0 model together and output both predictions at once without interference, and then compare their lift.

RFM Node: each field maps to a 5-bin field, and each bin score has a weight of 10, so a top-R, top-F, second-bin-M customer gets a final score of 5*10+5*10+4*10=140 (lots of other customers get a score of 140 as well).
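A small pandas sketch of that scoring scheme on synthetic data (5 quantile bins per field, each bin score weighted by 10); the column names are assumptions, not Clementine's:

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 200),
    "frequency":    rng.integers(1, 40, 200),
    "monetary":     rng.gamma(2.0, 250.0, 200),
})

def five_bins(series, ascending=True):
    """Quantile-split into 5 bins scored 1..5 (5 = best)."""
    ranked = series.rank(method="first", ascending=ascending)
    return pd.qcut(ranked, 5, labels=False) + 1

df["R"] = five_bins(df["recency_days"], ascending=False)  # recent -> high score
df["F"] = five_bins(df["frequency"])
df["M"] = five_bins(df["monetary"])

# Each bin score carries a weight of 10, as described above: a top-R, top-F,
# second-bin-M customer scores 5*10 + 5*10 + 4*10 = 140 (ties are common).
df["rfm_score"] = (df["R"] + df["F"] + df["M"]) * 10
print(df["rfm_score"].value_counts().head())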

What PROFSET does: a product's profit should not be measured by its own margin but by the total profit of the product combinations it appears in. A combination's profit is assigned to it in proportion to how often the combination occurs (with baskets AB, AB, BC, the AB combination accounts for 2 out of 3). This can change the profit ranking of 54 of 218 products, including which product ranks first. (So when promoting, promote these hidden profit-bringers rather than the items with the biggest stand-alone profit?)

You can export the processed data to the database on one branch while another branch carries on processing (data audit, web, distribution) at the same time — a way to save weekend time.

In essence, mapping data results simply in the creation of a new Filter node, which matches up the appropriate fields by renaming them.

Using parameters

Parameter Value Long name
Train.time 5 Time to train (minutes)
Sample.rand_pct 10 Percentage random sample

Note: The parameter names, such as Sample.rand_pct, use correct syntax for referring to node properties, where Sample represents the name of the node and rand_pct is a node property. See Properties Reference Overview for more information.

Once you have defined these parameters, you can easily modify values for the two Sample and Neural Net node properties without reopening each dialog box. Instead, simply select Set Parameters from the SuperNode menu to access the Parameters tab of the SuperNode dialog box, where you can specify new values for Random % and Time. This is particularly useful when exploring the data during numerous iterations of model building.

Wednesday, July 02, 2008

Clementine tips from the expert Tim, and how to…

Building two identical data source nodes (even when merging the same table with itself) can speed things up:

http://www.kdkeys.net/forums/thread/7255.aspx

As a small Clementine user tip, try using multiple identical database source nodes if you are merging database tables (even different rows of the same database table). Don't access the data in one database source node and use Clementine streams to split the data and re-join back later. This will prevent SQL pushback and force Clementine to write temporary data to disk.

Overlay the clustering result with gender, a status flag, etc., to see that attribute's values within each cluster:
As a general tip. I usually do clustering on customer behaviour data only (for me this is mobile or fixed line phone usage) and then 'colour'/overlay the clusters by socio-economic attributes (age, household income, number of children, marriage status etc etc..) in order to identify how the customers in each cluster are. You might also want to apply market research and surveys to samples of customers from each cluster to further enrich your understanding of the clusters - but only after clustering customer behaviour only. Well, that's my preference

How I check whether a result set contains duplicates: use Distinct, or Aggregate with record count > 1 — a count above 1 means a duplicate.

How to get the difference of two result sets: use the Merge node's anti-join; note that the first set must be the larger one for there to be any output.
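The same two checks in pandas, for reference (toy data; the anti-join mirrors what the Merge node does in anti-join mode):

import pandas as pd

a = pd.DataFrame({"customer_id": [1, 2, 2, 3, 4]})
b = pd.DataFrame({"customer_id": [2, 3]})

# Duplicates: aggregate and keep keys whose record count is > 1.
counts = a.groupby("customer_id").size()
print("duplicated ids:", counts[counts > 1].index.tolist())      # -> [2]

# Set difference (rows of `a` not in `b`): an anti-join; `a` is the larger set.
anti = a.merge(b, on="customer_id", how="left", indicator=True)
anti = anti[anti["_merge"] == "left_only"].drop(columns="_merge")
print("a minus b:", anti["customer_id"].unique().tolist())       # -> [1, 4]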

Am I under the "Curse of Dimensionality"?
Right now my Clementine is running a process, I've been working 12 hours straight with just some breaks for eating and going to the bathroom.

My objective is to assign a probability number to 1,750,000 records, being the probability of acquiring a product next month. Each month, only 5,000 of those records acquire the product. I have 150 usable fields, with many different kinds of distributions, storage classes, types, and so on. I even have one set field with 100 different values.


One of the things i've done is combine 4 months of history, getting about 500 fields (there is no relevant history for some). I derived new fields (a very, very long duty), for example, the mean of the 4 months, the delta X between the mean of the first 3 months and the last month, set values with for example 8 values for describing historical behaviours of flag fields (like getting 1 - 0 - 0 - 1 or 0 - 1 - 1 - 0, for example), and i arrived to about 250 fields. When using feature selection i could screen half of those 250 (importance: Important) and end up with 125 fields for a neural network.

I can use the neural network to get 88% accuracy with the 5.000 buyers and 20.000 sample non-buyers... but of course, when i test the model on the whole database it's just useless, i get many many non buyers with a near-to-1 value in the probability field (softmax method).

I really don't know what to do!!! I can't even try to find correlations between fields because there are so many, it's a very stressing work, imagine how could someone end if he sees a graph between every field, having 125 fields. And believe me i've tried, but i've got nothing.

I thought of using PCA Factor but using it without any data preparation i get no improvement in the modeling. And if i want to normalize in a scale from 0 to 1 every single variable............................... remember i told you i have them of all flavours, i would get mad. I can't even bin because i don't get stable categories and i can't trust in getting the same categories for another period of months.



I've already described the magnitude of my database... am i doing something wrong? What would you do in my place? Even if i managed to reduce the dimensionality of the fields and take it to a reasonable number for a single person (me) to explore... is it really possible to generate a good model in the context i described?

I'll repeat it: I have data from every month. About 1,750,000 records. About 5,000 of those buy the product "A" (for example, an insurance policy) every month. I have to assign a probability to each record for buying the product in the next month, a probability good enough so that if I say that these 100,000 clients have a 50% chance of buying it, about 50,000 buy.

If you watch Lost... and remember Hurley needing someone to tell him he was cursed... well... i'm feeling just like him. I can't deal with this.


Thank you very much.










TimManns replied (05-23-2007, 18:35):
Hi,

Always tricky to help with these types of questions...I'll try...

What is your gains or lift chart of the scored Neural Net model like?

It sounds like you are doing similar stuff to some of my monthly tasks. Your data processing steps sound exhaustive (in a good way) and everything sounds sensible.
I run prediction models, whereby I assign a probability of churn and also probabilities of churn to each specific competitor (we have maybe just 3 competitors) (particular customers of certain age, demographics and behaviour profile are more likely to go to certain competitors). I don't get brilliant results for the competitor probabilities, but the gains charts are acceptable. In my gains charts, at the 25-30% customer base point we have a gain of 60% (that's double the random of 30%). Our lift charts are pretty good.

To be honest, does it really matter what your classification prediction is? As long as your top n% of scored data has a much higher incidence of the correct outcome. I have some projects where my scored data predicts 20 or 30% incidence of my outcome, but only 3 or 5% incidence actually occurs (I'm talking about churn btw). The top 5% of my scored data contains a very high proportion of the actual outcomes, so I'm happy. Our marketing campaigns only ever use at most the top 10%. My model accuracy over the whole base is maybe just 65-70% because I'm over predicting (false postives).

If you are getting a good looking lift or gains charts, then use this to present your results. Any campaigns to target customers should be selectively contacting the top n% from your base. If your top 3% probablities actually contain a lot of the target outcomes then you are doing fine.

In your case, you don't have to target every customer predicted above 50% chance. If you know that about 50,000 buys should occur, then simply sort by probability to buy in descending order and target your top 50,000 customers (or 100k to catch any leftovers).
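A sketch, in Python, of the ranking approach Tim describes: score everyone, sort by the score, take the top N, and read the result off a crude gains table. The data here is random, so the gains come out flat; it only shows the mechanics:

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
scored = pd.DataFrame({
    "customer_id": np.arange(100_000),
    "p_buy":       rng.beta(1, 30, 100_000),      # model score / probability
    "bought":      rng.random(100_000) < 0.003,   # actual outcome (toy data)
})

# Rank by score (descending) and take the top N for the campaign.
top_n = scored.sort_values("p_buy", ascending=False).head(50_000)

# A crude gains check: what share of all buyers falls in each score decile?
scored["decile"] = pd.qcut(scored["p_buy"].rank(method="first"), 10, labels=False)
gains = scored.groupby("decile")["bought"].sum() / scored["bought"].sum()
print(gains.sort_index(ascending=False).cumsum())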

Weighting CART and using miss-classifications costs in C5 might help you use a little more data when building your model, and could help ensure you provide probablities and predictions near the actual level of incidence. Using slightly less balanced data for a Neural Net could help, but watch out for the Neural Net just giving one outcome.

i hope this helps a little...

Tim





Arkantos replied (05-23-2007, 21:54):
Tim i'm very very thankful for your help, it's really good to have someone on the other side answering questions.

Before i found this forum (... before i found you) i could just read and read books and stuff on the net and whatever i could find to learn, but it's so, so good to have a teacher.

Again, BIG THANKS, i'll get to work and i'll post again.





JeffZanooda replied (07-08-2007, 7:05):
Did you adjust for the difference in response rate between your training sample and the entire population? Your population response rate is 5,000/1,750,000 = 0.29%, while sample response rate is 5,000/25,000 = 20%. Otherwise the model will overestimate the probability of response.


For logistic regression this is usually done by adjusting the intercept. Alternatively, if your software allows it you can attach weight of 70 to the non-responders.
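Jeff's adjustment can also be applied after scoring. If all responders are kept and only non-responders are down-sampled, the model's predicted odds are inflated by the ratio of the sample base rate to the population base rate and can be scaled back. A sketch using the rates quoted in this thread (the formula is the standard prior correction, not something taken from these posts):

def correct_probability(p_sample, rate_sample, rate_population):
    """Rescale a probability predicted by a model trained on over-sampled data.

    Assumes the positives were all kept and only the negatives were
    down-sampled, so the model's odds are inflated by the ratio of the
    sample base rate to the population base rate.
    """
    odds_sample = p_sample / (1 - p_sample)
    adjustment = (rate_population / (1 - rate_population)) / (
                  rate_sample / (1 - rate_sample))
    odds_pop = odds_sample * adjustment
    return odds_pop / (1 + odds_pop)

# Rates quoted in the thread: 5,000 / 1,750,000 in the population,
# 5,000 / 25,000 in the balanced training sample.
print(correct_probability(0.9, rate_sample=0.20, rate_population=5000 / 1750000))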


Arkantos replied (07-08-2007, 19:47):
Jeff: thanks for your answer.

I'm using neural networks and I can adjust Alpha, Initial Eta, High Eta, Eta Decay and Low Eta. Adjusting any of there parameters should help to teach the model to expect much less response in real deployment?

I've been trying several response rates (leaving always 5,000 true values but changing the false values) for the modeling step, and I found out that the best models come from the lowest response rates (the closer to the real ones). Simple logic tells me that if I use even more false values I should get better models. In the future I will try this just to experience the result, as there are two drawbacks:

1) I get a very small amount of true values in the prediction.
2) The modeling time increases exponentially.

Drawback number one has simple avoidance: I shouldn't care if I don't get true values in the prediction. As long as my gain charts are better than the others, then it's a good model. I just need the confidence value to classify the entries. Drawback number two also has simple avoidance: press Execute and go to sleep. So I'll guess I'll be trying this as soon as possible.

Thanks again.

Best regards.




TimManns replied (07-08-2007, 21:21):
This is an FYI...

I build a predictive model to identify likely churners for the subsequent month. I score a proportion (higher spenders) of our mobile customer base every month (for example on June 20th, forecasting churn for the whole month of July). Let’s say this is approx 3 million rows. Those customers with the highest churn score are contacted with a retention offer within the next few days. This might only be 10k customers, depending upon the available budget and workload. Contact methods change too, sometimes the customer may be called, other times a letter is sent. Our campaign delivery team are very fast so we have this luxury of a quick turn-around.

The model (a neural network) was built months ago using a sample of approx 10k churners and 20k active (random) customers. It is important that you balance the data prior to building the model. I normally update (re-build) the predictive model every few months as necessary. My current model has been performing well for 5-6 months now because we have not had any big changes in our market.

My predictive churn model predicts approx 8% churn each month. This is far higher than our actual churn rate, but the churn score is used to order the customers by ‘churn probability’, and this places the most likely churners at the top of the list. For this reason it doesn’t matter too much that the predictive model over estimates churn incidence.

I don’t play around with the neural network options much, preferring instead to apply comprehensive data manipulation. The final data set that I present to the predictive model is several hundred columns wide, containing an array of transformed customer call related data and account information. The data processing time for my analysis is approx 4 hours, involving accessing tables containing 70 million rows of call usage data per day. After the data transformations are complete the scoring of the customer base through the neural network takes approx 10 mins. This is because I have configured my Clementine stream to run the neural network as SQL, and all processing load occurs in our Teradata warehouse (a we have a huge DB system). I always ensure that any single analysis job can be completed within a working day, otherwise we consider it infeasible.

It sounds as though you have a similar process in place :)

I hope this helps

Tim